Unkillable process ID on macOS port 5432 [closed] - postgresql

I am trying to re-install postgresql because I was not able to log in anymore (lost password). However, every time I kill the process on the corresponding port (5432), the PID changes and the port still does not get freed. I am getting frustrated; this has been going on for over two weeks now.
Here is what I am doing:
# find the PID on 5432
sudo lsof -i :5432   # this gives me a line where I can identify the process ID
sudo kill -9 <PID>   # I use the PID given by the previous command
The last command triggers a prompt asking me whether I want postgres to accept incoming network connections. Whichever option I choose (Deny or Allow) leads to the same thing: when I try to start postgres, it still tells me that port 5432 is busy, and indeed it is. When I re-run the first command above, I notice that postgres is still there and the PID has changed.

I sorted the problem: I had another instance of postgres (9.5, I believe) running in the background. I found it in my Library; now the port is completely free.
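For anyone hitting the same symptom: a PID that changes after every kill -9 usually means a service manager is respawning the process, which on macOS is typically launchd. A minimal sketch for finding and unloading the responsible job (the plist path below is hypothetical; the label depends on how postgres was installed):

launchctl list | grep -i postgres        # user-level jobs
sudo launchctl list | grep -i postgres   # system-level jobs

# unload the job so launchd stops respawning it (example path, adjust to yours)
sudo launchctl unload -w /Library/LaunchDaemons/com.edb.launchd.postgresql-9.5.plist

Once the job is unloaded, port 5432 should stay free.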

Related

Liquibase running as a Kubernetes job is failing to connect to the Postgres container [closed]

Setting up a minikube cluster with postgres and liquibase:
--> postgres is deployed in a pod
--> a liquibase job runs to update the postgres database
Kubernetes job file to run the update command in liquibase:
Dockerfile to create the liquibase image:
Error log:
The pod is not able to establish a connection to the database. Make sure the database username and password are correct. Instead of setting localhost in LIQUIBASE_URL in the Dockerfile, provide the database IP there. Also try to exec into the pod and check whether you can ping the machine where the database is hosted.
The issue is resolved by giving the reference of the internal endpoint of the Postgres pod. :)
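For context: inside the cluster, localhost in the liquibase container refers to that container itself, not to Postgres. The usual fix is to point the JDBC URL at the Postgres Service's DNS name. A minimal sketch, assuming a Service named postgres in the default namespace and a database named mydb (all three names are assumptions):

kubectl get svc postgres   # confirm the Service exists and note its port

# then in the Job spec or Dockerfile:
# LIQUIBASE_URL=jdbc:postgresql://postgres.default.svc.cluster.local:5432/mydb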

PostgreSQL strange behaviour in Ubuntu 18.04 [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
Improve this question
I installed PostgreSQL using:
sudo apt install libpq-dev postgresql postgresql-contrib
Everything works fine at the beginning, but I also need remote connections, so I need to modify pg_hba.conf and postgresql.conf. I make backups of them before modifying.
Then I restart: sudo systemctl restart postgresql
Sometimes it works perfectly,
but in other cases, when I try sudo -u postgres psql I get the following error:
psql: could not connect to the server: No such file or directory. Is
the server running locally and accepting connections on Unix domain
socket "/var/run/postgresql/.s.PGSQL.5432"?
It is very strange because I change just the IP address in pg_hba.conf to allow remote connections, and sometimes it works with no errors and sometimes I receive the error. Remote access also stops working.
I go back to the backup files and restart the server (so no remote-related changes in the files), but the error remains.
I check the service with sudo systemctl status postgresql: it is active and working.
I have no idea what is wrong; after returning to the initial files from the backups, I expected the error to be fixed. Please help.
I found this error asked about multiple times, but in my case the server is active, and even after restoring the backups it is not working.
I managed to solve this with the following method.
1. Check the postgresql logs:
tail -f /var/log/postgresql/<what-ever-postgresql-log-name>.log
If your log shows FATAL: could not remove old lock file as follows, then go to step 2.
2019-09-06 01:49:13.477 UTC [5439] LOG: database system is shut down
pg_ctl: another server might be running; trying to start server anyway
2019-09-06 01:51:17.668 UTC [1039] FATAL: could not remove old lock file "postmaster.pid": Permission denied
2019-09-06 01:51:17.668 UTC [1039] HINT: The file seems accidentally left over, but it could not be removed. Please remove the file by hand and try again.
pg_ctl: could not start server
2. Remove postmaster.pid from the data_directory path.
You can check your data_directory path via:
cat /etc/postgresql/*/main/postgresql.conf
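If you only want that one setting, grep narrows the output (assuming the stock Ubuntu package layout):
grep data_directory /etc/postgresql/*/main/postgresql.conf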
Confirm your data_directory path, then issue the command below:
rm /var/lib/postgresql/10/main/postmaster.pid
3. Set permissions for the postgres user on the socket directory, which in my case is /var/run/postgresql.
Honestly, I am still looking for an answer as to why we need to set permissions here, given that by default the postgres user should already have them.
sudo chmod 765 /var/run/postgresql
sudo chown postgres /var/run/postgresql
4. Then restart the service:
sudo service postgresql restart
5. Test whether it is working:
sudo -u postgres psql
Note: I am using PostgreSQL 10.
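If psql still fails after these steps, a quick sanity check (based on the socket path in the error above) is whether the Unix socket actually exists:
ls -l /var/run/postgresql/.s.PGSQL.5432
If the socket file is present and owned by postgres, the server is listening locally and the remaining problem is more likely in pg_hba.conf.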

ssh-agent across ssh sessions on shared host [closed]

I ssh into a shared host (WebFaction) and then use ssh-agent to establish a connection to a mercurial repository (BitBucket). I call the agent like so:
eval `ssh-agent`
This then spews out the pid of the agent and sets its relevant environment variables. I then use ssh-add as follows to add my identity (after typing my passphrase):
ssh-add /path/to/a/key
My ssh connection eventually times out and I'm disconnected from the server. When I log back in, I can no longer connect to the Hg server and so I do this:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
And then I repeat the two commands at the top of the post (i.e. invoke the agent using eval and call ssh-add).
I'm sure that there's a well-established idiom for avoiding this process and maintaining a "reference" to the agent that was spawned initially. I've tried redirecting the I/O of the first command to a file (in the hope of sourcing it in my .bashrc), but I only get the agent's pid.
How can I avoid having to go through this process each time I ssh into the host?
My *NIX skills are weak, so constructive criticism on any aspect of the post is welcome, not just my use of ssh-agent.
Short answer:
With ssh-agent running locally and identities added, ssh -A user@host.webfaction.com provides the secure shell on the remote host with the local agent's identities.
Long answer:
As Charles suggested, agent forwarding is the solution.
At first, I thought that I could just issue an ssh user@host.webfaction.com and then, from within the secure session on the remote host, connect to the BitBucket repository using hg+ssh. But that failed, and so I investigated the ForwardAgent and AllowAgentForwarding options.
Thinking that I'd have to settle for a workaround in .bashrc that involved keeping my private key on the remote host, I went looking for a shell-script solution but was spared from this kludge by this answer in SuperUser, which is perfect and works without any client configuration (I'm not sure how the sshd server is configured on WebFaction).
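For reference, a minimal sketch of the client-side configuration that makes forwarding automatic (assuming an OpenSSH client; the host pattern is an example):

# ~/.ssh/config
Host *.webfaction.com
    ForwardAgent yes

With this in place, a plain ssh user@host.webfaction.com behaves like ssh -A, and hg+ssh on the remote host can use the local agent's identities.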
Aside: in my question, I posted the following:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
but this is actually inefficient and requires the user to know his/her uid (available via /etc/passwd). pgrep is much easier:
pgrep -u username process-name
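For example, to find your own ssh-agent processes:
pgrep -u "$(whoami)" ssh-agent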

How to make MongoDB Server start on Linux Startup (CentOS) [closed]

I'm using Linux CentOS 5.4. I installed MongoDB, and now it's available as a daemon and service.
When I execute service mongod start, it says [OK] (in green) as if the service started, but when I try to connect to it I find it not working.
However, when I run mongod from the shell directly, it starts; but if I close the shell connection, it stops.
How do I add it to the startup of the OS? Or how do I run it in the background?
Add /usr/bin/mongod to /etc/rc.local; this will make it start with the startup scripts.
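A minimal sketch of what such an entry might look like (the config and log paths are assumptions; mongod's --fork flag requires a log destination such as --logpath):

# hypothetical /etc/rc.local entry
/usr/bin/mongod --config /etc/mongod.conf --fork --logpath /var/log/mongod.log

Without --fork, mongod stays attached to the shell that started it, which is why it stops when the shell closes.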
I think you need to create basic init scripts to start MongoDB as a daemon and create a mongodb user. Detailed information can be found here: Mongo DB installation
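Once an init script exists (e.g. at /etc/init.d/mongod, as the linked guide sets up), CentOS 5 registers it for boot with chkconfig. A sketch, assuming that script name:

sudo chkconfig --add mongod
sudo chkconfig mongod on
sudo service mongod start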

Mac OS X ignoring hosts file [closed]

Mac seems to be ignoring new changes to my hosts file. Older changes work without a problem. I've spent the past 4 hours trying to figure this one out. Help!
I have folders for each site that I develop in my /Sites folder. For example, several folders are named:
wp.dev
daf.dev
test.dev
I run MAMP, set the Apache Port to 80 and the MySQL Port to 3306 (so that I don't have to add the port to the address bar in a browser).
I have edited my /private/etc/hosts file as follows:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 wp.dev
127.0.0.1 daf.dev
127.0.0.1 test.dev
fe80::1%lo0 localhost
Here's the kicker: wp.dev and daf.dev have been around for over a month. They resolve without a problem in my browser. I added test.dev this morning. When I type it into a browser it simply searches "test.dev" as opposed to resolving a domain.
I can ping any of the above domains and they go to 127.0.0.1, including test.dev.
For what it's worth, I've tried VirtualHostX with the same problem. I also run dscacheutil -flushcache and restart MAMP when making changes.
I need to kick off development on a new site, and this is driving me crazy.
Try putting all your entries at the top of the file.
Not really logical, but worth a try.
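A sketch of what that reordering might look like (same entries, custom hosts moved above the IPv6 lines):

127.0.0.1       localhost
127.0.0.1       wp.dev
127.0.0.1       daf.dev
127.0.0.1       test.dev
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost

After saving, flush the DNS cache again (dscacheutil -flushcache) so the browser picks up the change.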