GitLab 502 error after configuring an additional PostgreSQL in a different path - postgresql

I have GitLab installed with the default bundled PostgreSQL, and it was working fine until I had to install another ALM tool that ships with its own PostgreSQL, installed in a different path. So now I have two PostgreSQL setups, each configured on a different port, but I still cannot open GitLab: I get a 502 error with "Whoops, GitLab is taking too much time to respond".
Here is the output of sudo gitlab-ctl tail unicorn:
==> /var/log/gitlab/unicorn/unicorn_stderr.log <==
E, [2019-09-03T17:33:28.047856 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
E, [2019-09-03T17:33:28.047963 #20275] ERROR -- : retrying in 0.5 seconds (4 tries left)
E, [2019-09-03T17:33:28.549091 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
E, [2019-09-03T17:33:28.549304 #20275] ERROR -- : retrying in 0.5 seconds (3 tries left)
E, [2019-09-03T17:33:29.050605 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
E, [2019-09-03T17:33:29.050778 #20275] ERROR -- : retrying in 0.5 seconds (2 tries left)
E, [2019-09-03T17:33:29.551638 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
E, [2019-09-03T17:33:29.551781 #20275] ERROR -- : retrying in 0.5 seconds (1 tries left)
E, [2019-09-03T17:33:30.052684 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
E, [2019-09-03T17:33:30.052855 #20275] ERROR -- : retrying in 0.5 seconds (0 tries left)
E, [2019-09-03T17:33:30.553731 #20275] ERROR -- : adding listener failed addr=127.0.0.1:8080 (in use)
==> /var/log/gitlab/unicorn/unicorn_stdout.log <==
bundler: failed to load command: unicorn (/opt/gitlab/embedded/bin/unicorn)
==> /var/log/gitlab/unicorn/unicorn_stderr.log <==
Errno::EADDRINUSE: Address already in use - bind(2) for 127.0.0.1:8080
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/socket_helper.rb:164:in `bind'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/socket_helper.rb:164:in `new_tcp_server'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/socket_helper.rb:144:in `bind_listen'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/http_server.rb:241:in `listen'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/http_server.rb:851:in `block in bind_new_listeners!'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/http_server.rb:851:in `each'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/http_server.rb:851:in `bind_new_listeners!'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/lib/unicorn/http_server.rb:140:in `start'
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/unicorn-5.4.1/bin/unicorn:126:in `<top (required)>'
/opt/gitlab/embedded/bin/unicorn:23:in `load'
/opt/gitlab/embedded/bin/unicorn:23:in `<top (required)>'
==> /var/log/gitlab/unicorn/current <==
2019-09-03_12:03:30.59723 master failed to start, check stderr log for details
2019-09-03_12:03:31.60724 failed to start a new unicorn master
2019-09-03_12:03:31.64104 starting new unicorn master
Should I change the default port for Unicorn and GitLab?

The issue was resolved after I moved the other application to a different port number and restarted the server.

According to the error messages, it is not the additional PostgreSQL causing problems. What port is the other application you installed listening on? I assume it's also listening on port 8080 just like GitLab Unicorn does.
Errno::EADDRINUSE: Address already in use - bind(2) for 127.0.0.1:8080
Unicorn can't start because something else is listening on port 8080. You should reconfigure your other application so it listens on a different port than GitLab.
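To confirm what is actually holding the port before deciding which side to move, something like this should work (exact tools vary by distribution; lsof may not be installed by default):
# show the process bound to 127.0.0.1:8080
sudo ss -ltnp | grep ':8080'
# or, if lsof is available
sudo lsof -i :8080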
If you would rather change the Unicorn port, you can do so by adding/changing the following configuration in /etc/gitlab/gitlab.rb.
unicorn['port'] = 8080 # Change this to something else.
After this change you will need to run sudo gitlab-ctl reconfigure.
It may be beneficial for you to consider installing this new application on a new server/VM.

Related

"was unexpected at this time" error in JDeveloper 12c

I installed JDeveloper 12 and set up the server. When I start the server, this appears in the log:
"* Using HTTP port 7102
Using SSL port 7107 *
C:\Users\MYPC\AppData\Roaming\JDeveloper\system12.2.1.2.42.170105.1224\DefaultDomain\bin\startWebLogic.cmd
[Starting IntegratedWebLogicServer.]
[waiting for the server to complete its initialization...]
C:\Program was unexpected at this time.
Process exited.
[IntegratedWebLogicServer terminated.]"
What exactly is this error, and how can I solve it?

HAProxy 1.8 - Passing socket connection during HAProxy soft reload

I am using the Kubernetes load balancer (here the HAProxy configuration is rewritten every 10 s and HAProxy is restarted). Since I want to pass the socket connection while reloading HAProxy, I changed the HAProxy Dockerfile so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (the template into which the HAProxy configuration is written):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the Docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI: I commented out the stats socket line in the template.cfg file and ran the Docker image to verify whether the restart command identifies the socket. The same error occurred. It seems the soft-restart command doesn't pick up the stats socket created by HAProxy.
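One way to check whether the stats socket is actually created at the configured path inside the container is a quick probe like the one below; this is only a diagnostic sketch, and socat is an assumption (it may need to be added to the image):
# confirm the admin socket exists at the path set in template.cfg
ls -l /var/run/haproxy/admin.sock
# talk to the running HAProxy over the stats socket (requires socat)
echo "show info" | socat stdio /var/run/haproxy/admin.sock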

NodeBB web app fails to start and just crashes

I managed to install the NodeBB forum and was able to use it. The next day it failed to start and just crashes. I think the problem is that connect-mongo cannot connect to the MongoDB database. Please help me look at this error.
16/2 20:26 [4171] - info: Launching web installer on port 4567
events.js:72
throw er; // Unhandled 'error' event
Error: listen EADDRINUSE
at errnoException (net.js:905:11)
at Server._listen2 (net.js:1043:14)
at listen (net.js:1065:10)
at Server.listen (net.js:1139:5)
at EventEmitter.listen (/home/goldsoft25/Desktop/NodeJS/NodeBB-master/node_modules/express/lib/application.js:617:24)
at launchExpress (/home/goldsoft25/Desktop/NodeJS/NodeBB-master/install/web.js:53:15)
This is a very common error for any application running on top of Node: two applications are pointing to the same port, 4567 in your case.
You will have to change the port of one application, or kill the other application when swapping between the apps.
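To see what is already bound to port 4567 and free it, something like the following should work (flags vary slightly by system; alternatively, NodeBB's own port can be changed in its config.json):
# find the process listening on NodeBB's port
sudo lsof -i :4567
# stop the stale node process, using the PID shown by lsof
kill <PID>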

Haskell: Testing connection availability N times with a delay (scotty to mongodb)

I have a stupid problem with a Scotty web app and the MongoDB service starting in the right order.
I use systemd to start MongoDB first and then the Scotty web app. It does not work for some reason: the app errors out with connect: does not exist (Connection refused) from the MongoDB driver, meaning that the connection is not ready.
So my question: how can I test the connection availability, say, three times with a 0.5 s interval and only then error out?
This is the application's main function:
main :: IO ()
main = do
    pool <- createPool (runIOE $ connect $ host "127.0.0.1") close 1 300 5
    clearSessions pool
    let r = \x -> runReaderT x pool
    scottyT 3000 r r basal

basal :: ScottyD ()
basal = do
    middleware $ staticPolicy (noDots >-> addBase "static")
    notFound $ runSession
    routes
Although the app service is ordered after the mongodb service, the connection to MongoDB is still unavailable during the app's start-up, so I get the error mentioned above.
This is the systemd service file, to preempt questions about the correct service ordering:
[Unit]
Description=Basal Web Application
Requires=mongodb.service
After=mongodb.service iptables.service network-online.target
[Service]
User=http
Group=http
WorkingDirectory=/srv/http/basal/
ExecStart=/srv/http/basal/bin/basal
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
I don't know why the connection to MongoDB is not available despite the correct service order. So I want to probe the connection availability within the Haskell code three times with a 0.5 s delay and only then error out. How can I do it?
Thanks.
I guess from the functions you're using that you're using something like mongoDB 1.5.0.
Here, connect returns something in the IOE monad, which is an alias for ErrorT IOError IO.
So the best approach is to use the retrying mechanisms ErrorT offers. As it's an instance of MonadPlus, we can just use mplus if we don't care about checking for the specific error:
retryConnect :: Int -> Int -> Host -> IOE Pipe
retryConnect retries delayInMicroseconds host
    | retries > 0 =
        connect host `mplus`
            (liftIO (threadDelay delayInMicroseconds) >>
             retryConnect (retries - 1) delayInMicroseconds host)
    | otherwise = connect host
(threadDelay comes from Control.Concurrent).
Then replace connect with retryConnect 2 500000 and it'll retry twice after the first failure with a 500,000 microsecond gap (i.e. 0.5s).
If you do want to check for a specific error, then use catchError instead and inspect the error to decide whether to swallow it or rethrow it.
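Applied to the question's main, the pool creation would then look roughly like this (just a sketch; everything except the retryConnect call is taken verbatim from the question):
-- retry twice, 0.5 s apart, before giving up on the connection
pool <- createPool (runIOE $ retryConnect 2 500000 $ host "127.0.0.1") close 1 300 5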

gitlab vagrant vm rake:test aborted

I'm trying to install the gitlab-vagrant-vm from this repository: https://github.com/gitlabhq/gitlab-vagrant-vm
If I run
bundle exec rake gitlab:test
it runs the tests with all their cases and checks, but the summary gives me the following error lines:
Error summary:
Errors (1)
Project Issue Tracker :: I set the issue tracker to "Redmine" :: And change the issue tracker to "Redmine"
Steps Summary: (1379) Successful, (0) Undefined, (0) Pending, (0) Failed, (1) Error
Coverage report generated for RSpec to /vagrant/gitlabhq/coverage. 4937 / 7059 LOC (69.94%) covered.
rake aborted!
Command failed with status (1): [/opt/rbenv/versions/2.0.0-p247/bin/ruby -S...]
Tasks: TOP => spinach
(See full trace by running task with --trace)
rake aborted!
rake spinach failed!
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:15:in `block (3 levels) in '
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:12:in `each'
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:12:in `block (2 levels) in '
Tasks: TOP => gitlab:test
(See full trace by running task with --trace)
vagrant#precise32:/vagrant/gitlabhq$
What is this error about, and how can I fix it? I really want to work with GitLab, and I've been trying for several days now.
I am using:
OS X 10.9 with Server 3
Vagrant 1.3.5
VirtualBox 4.3
In the Vagrant VM I ran apt-get update / upgrade to get the latest Ubuntu packages.
greetz!
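One way to narrow this down (only a diagnostic sketch, not a full answer; the feature file path is a guess based on the failing scenario name):
# re-run the task with a full backtrace, as the rake output suggests
cd /vagrant/gitlabhq
bundle exec rake gitlab:test --trace
# or run only the failing Spinach feature to see its error in isolation
bundle exec spinach features/project/issues/issue_tracker.feature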