I am using Neo4j on a remote server (Ubuntu 20.04) and would like to stream data from MongoDB to Neo4j. I followed the instructions here and tried both of the following approaches:
Use the following command:
sudo wget https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/tag/4.3.0.7/apoc-mongodb-dependencies-4.3.0.7.jar -O /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar
Note that the plugins directory has a different path due to mounting, and I changed the path in the configuration file accordingly. This should not be causing any problems, because I had the same problem before mounting.
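For reference, the relevant setting in neo4j.conf would look something like the sketch below (the mounted path is my own):
# neo4j.conf - point Neo4j at the mounted plugins directory
dbms.directories.plugins=/mnt/neo4j/plugins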
Also, in a separate attempt, I tried matching the same release as the apoc-core file (4.4.0.3), with no better outcome.
Changing the ownership and read permissions as follows didn't help either:
sudo chown neo4j:neo4j apoc-mongodb-dependencies-4.4.0.3.jar
sudo chmod 755 apoc-mongodb-dependencies-4.4.0.3.jar
Use the following commands:
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.11/mongo-java-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongo-java-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver/3.12.11/mongodb-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongodb-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver-core/4.7.1/mongodb-driver-core-4.7.1.jar -O /mnt/neo4j/plugins/mongodb-driver-core-4.7.1.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/bson/4.7.1/bson-4.7.1.jar -O /mnt/neo4j/plugins/bson-4.7.1.jar
Note that I used the latest versions. I also tried the versions given in the instructions, with no difference in the outcome.
Now, when restarting the neo4j.service, I can no longer access the cypher-shell or the browser. In the first case I get "connection refused", while in the browser I get a blank page. When I check the status, the service is active and running, but I noticed that it is missing a line compared to when I don't have the dependencies:
Starting...
This instance is ServerId{#}
======== Neo4j 4.4.5 ======== (This line is missing with the dependencies downloaded!)
When I delete the dependencies from the plugins directory and restart, everything goes back to normal and functions as expected. One more thing to note is that apoc-core procedures work just fine!
I don't know if I'm doing something wrong here or if there is some sort of underlying problem!
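To dig further into why the startup stalls, tailing the logs during a restart usually surfaces the underlying exception. A sketch, assuming a systemd service and default log locations (paths differ per install):
# Follow the service log during a restart
sudo journalctl -u neo4j -f
# Or inspect the debug log (its path depends on dbms.directories.logs)
sudo tail -n 100 /var/log/neo4j/debug.log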
Related
When I execute sudo apt update, I get:
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)
Also, I was getting a status error, which I solved using:
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status
I tried sudo mkdir /var/lib/apt/lists/partial, as suggested in a few other threads:
mkdir: cannot create directory ‘/var/lib/apt/lists/partial’: Not a directory
I even tried sudo mkdir /var/lib/apt/lists/.
Any other solution?
This answer may not be strictly on topic, but since I landed here, others may too.
If you're using Docker and you face the same issue, you can do the following:
USER root
# RUN commands
USER 1001
Reference: Link
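For context, a fuller Dockerfile using this pattern might look like the following sketch (the base image and package are placeholders):
FROM node:18
# Switch to root only for the installation steps
USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# Drop back to the unprivileged user the image normally runs as
USER 1001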
You can try adding -u 0 to the command:
sudo docker exec -u 0 -it ContainerID /bin/bash
According to Docker, the -u flag sets the username or UID the container runs as; setting -u 0 means you run the container as root. Use it with caution! Reference here.
The same happened to me. I followed this answer as a guide: The package lists or status file could not be parsed or opened.
I assumed my lists were corrupted. In /var/lib/apt/ I saw a file (lists#) instead of a directory. I deleted it (sudo rm lists) and re-created the path (sudo mkdir -p /var/lib/apt/lists/partial). Double-check that the path gets created.
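In commands, roughly (the stray lists file name is as I found it on my system):
cd /var/lib/apt
sudo rm lists                          # it was a file, not a directory
sudo mkdir -p /var/lib/apt/lists/partial
ls -ld /var/lib/apt/lists/partial      # double-check the path exists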
I ran into the same issue while building a new container and experimenting with the Dockerfile for a while.
What finally saved me was simply deleting all the containers I had created during this process using docker rm.
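Something along these lines (note this removes all containers on the host, not just the experimental ones, so use with care):
# Remove every stopped container (add -f to force-remove running ones)
docker rm $(docker ps -aq)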
I had this same issue when trying to install Typora on Ubuntu 20.04.
I ran into the error whenever I ran the command below:
# add Typora's repository
sudo add-apt-repository 'deb https://typora.io/linux ./'
Here's how I solved it:
I disconnected and reconnected my network connection, and when I ran the command again, it worked fine.
I think it was an issue with my network connectivity.
That's all.
I hope this helps
I had a similar error when using the Bitnami Spark image, and the docker exec command with the -u argument didn't work for me. I found my answer in the image documentation here.
If you are using a Docker image, it may be a non-root container image. Read the image provider's documentation to see how you can run the image as a root container.
This is how to get root access in a Docker container's bash and install your apps.
Get the container ID by name:
sudo docker ps -aqf "name=es01"
Access bash as root:
sudo docker exec -u 0 -it 3d42134dfd59 bash
Example install:
apt-get update
apt-get install nano
You first need superuser privileges: type sudo -i and enter your password.
I'm new to Postgres.
I updated the Dockerfile I use and successfully installed Postgresql on it. (My image runs Ubuntu 16.04 and I'm using Postgres 9.6.)
Everything worked fine until I tried to move the database to a volume with docker-compose (after making a copy of the container's folder with cp -R /var/lib/postgresql /somevolume/).
The issue is that Postgres just keeps crashing, as witnessed by supervisord:
2017-07-26 18:55:38,346 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:39,355 INFO spawned: 'postgresql' with pid 195
2017-07-26 18:55:40,430 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:40,763 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:41,767 INFO spawned: 'postgresql' with pid 197
2017-07-26 18:55:42,841 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:43,179 INFO exited: postgresql (exit status 1; not expected)
(and so on…)
Logs
It's not clear to me what's happening as /var/log/postgresql remains empty.
chown?
I suspect it has to do with the user. If I compare the data folder inside the container and the copy I made of it to the volume, the only difference is that the original is owned by postgres while the copy is owned by root.
I tried running chown -R postgres:postgres on the copy. The operation was performed successfully, however postmaster.pid remains owned by root and I think that would be the issue.
Questions
How can I get more information about the cause of the crash?
How can I make it so that postmaster.pid is owned by postgres?
Should I consider running postgres as root instead?
Any hint welcome.
EDIT: links to the Dockerfile and the docker-compose.yml.
I'll answer my own question:
Logs & errors
What made matters more complicated was that I was not getting any specific error message.
To change that, I disabled the [program:postgresql] section in supervisord and instead started Postgres manually from the command line (thanks to Miguel Marques for setting me on the right track with his comment).
Then I finally got some useful error messages:
2017-08-02 08:27:09.134 UTC [37] LOG: could not open temporary statistics file "/var/run/postgresql/9.6-main.pg_stat_tmp/global.tmp": No such file or directory
Fixing the configuration
I fixed the error above with the following commands, eventually adding them to my Dockerfile:
mkdir -p /var/run/postgresql/9.6-main.pg_stat_tmp
chown postgres.postgres /var/run/postgresql/9.6-main.pg_stat_tmp -R
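In the Dockerfile, those two commands became something like this sketch:
# Create the stats temp directory Postgres expects and hand it to postgres
RUN mkdir -p /var/run/postgresql/9.6-main.pg_stat_tmp \
    && chown -R postgres:postgres /var/run/postgresql/9.6-main.pg_stat_tmp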
(Kudos to this guy for the fix.)
To make the data permanent, I also had to do the following so that the volume would be accessible by postgres:
mkdir -p /var/lib/postgresql/9.6/main
chmod 700 /var/lib/postgresql/9.6/main
I also used initdb to initialize the data directory. BEWARE! This will erase any data found in that folder. Like so:
rm -R /var/lib/postgresql/9.6/main/*
ls /var/lib/postgresql/9.6/main/
/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/9.6/main
Testing
After the above, I could finally run postgres properly. I used this command to run it and test it from the command line:
su postgres
/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf # as per the Docker docs
To test, I kept it running and then, from another prompt, checked everything ran fine with this:
su postgres
psql
CREATE TABLE cities ( name varchar(80), location point );
INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
select * from cities; -- repeat this command after restarting the container to check that the data persists
…making sure to restart the container and test again to check the data did persist.
And then finally restored the [program:postgresql] section in supervisord, rebuilt the image and restarted the container, making sure everything ran fine (in particular supervisord: tail /var/log/supervisor/supervisord.log), which it did.
(The command I used inside of supervisord.conf is also /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf, as per this Docker article and other postgres+supervisord examples. Other options would have been using pg_ctl or an init.d script, but it's not clear to me why/when one would use those.)
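For reference, the [program:postgresql] section looked roughly like this (a sketch; the user and autorestart settings are my assumptions):
[program:postgresql]
command=/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf
user=postgres
autorestart=true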
I spent a lot of time on this. Hopefully the detailed answer will help someone down the line.
P.S.: I did end up producing a minimal example of my issue. If that can help anyone, here they are: Dockerfile, supervisord.conf and docker-compose.yml.
I don't know if this is just another way to achieve the same result (I'm new to Docker and Postgres too), but have you tried the official repository image for Postgres (https://hub.docker.com/_/postgres/)?
I get the data out of the container by setting the environment variable PGDATA to '/var/lib/postgresql/data/pgdata' and binding it to an external volume in the run command:
docker run --name bd_TEST --network=my_network --restart=always -e POSTGRES_USER="superuser" -e POSTGRES_PASSWORD="myawesomepass" -e PGDATA="/var/lib/postgresql/data/pgdata" -v /var/local/db_data:/var/lib/postgresql/data/pgdata -itd -p 5432:5432 postgres:9.6
When the volume is empty, all the files are created by the image's startup script, and if they already exist, the database starts using them.
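A roughly equivalent docker-compose service, as a sketch (my translation of the run command above; the network is omitted for brevity):
services:
  bd_TEST:
    image: postgres:9.6
    restart: always
    environment:
      POSTGRES_USER: superuser
      POSTGRES_PASSWORD: myawesomepass
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - /var/local/db_data:/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"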
From past experience I can see what may be the problem. I can't say whether this will help, but it is worth a try.
I would have added this as a comment, but I can't because my rep isn't high enough.
I've spotted a couple of problems with how you have structured the statements in your Dockerfile. You have installed various things multiple times and run updates sporadically throughout the file. In my own files I've noticed that this can lead to somewhat random behaviour of my services and installations because of the different layers.
This may not seem to solve your problem directly, but cleaning up your file as outlined in the best practices has solved many Dockerfile problems for me in the past.
One of the first places to start with such problems is the best practices for RUN. This has helped me solve tricky problems before, and I hope it will solve yours, or at least make them easier to find.
Pay special attention to this part:
After building the image, all layers are in the Docker cache. Suppose you later modify apt-get install by adding an extra package:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx
Docker sees the initial and modified instructions as identical and reuses the cache from previous steps. As a result, the apt-get update is NOT executed because the build uses the cached version. Because the apt-get update is not run, your build can potentially get an outdated version of the curl and nginx packages.
After reading this I would start by consolidating all your dependencies.
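For example, a single cache-friendly layer along these lines (the packages are the ones from the quoted snippet):
# Update and install in one layer, then clean the apt cache
RUN apt-get update && apt-get install -y \
    curl \
    nginx \
 && rm -rf /var/lib/apt/lists/*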
In my case, having the same error, I debugged it until I found out:
the disk was full, and I increased the disk space to solve this.
(A stupid error with an easy fix; maybe reading this here saves someone some time.)
Also linking this question for other options:
Supervisord "exit status 1 not expected" running php script
https://serverfault.com/questions/537773/supervisor-process-exits-with-exit-status-1-not-expected/1076115#1076115
This is really frustrating me. I have a DO VPS with Ubuntu 14.04 (64-bit) installed.
I installed VestaCP as a control panel on it and have hosted some PHP-based personal projects.
I also installed Meteor on it but never used it. Now, when I try to create a project and run it ('meteor create rt', then 'cd rt', then 'meteor'),
it gives the following error:
[[[[[ /home/admin/code/rt ]]]]]
=> Started proxy.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Can't start Mongo server.
root#RD:/home/admin/code/rt#
Could anyone please help? Please ask me for more information if required.
**** EDIT ****
I created a fresh DigitalOcean server, and it gives the same error there too. Is it some issue with DigitalOcean, or with its file system? I am confused. I have tried different flavours of Linux with the same result, all fresh Linux installations.
I finally got the solution. Posting it here for others.
The problem was that a few environment variables which MongoDB looks for on startup were not set.
Set the variables LC_ALL and LANG and it works fine (usually setting LC_ALL alone will do).
First, run the locale command and look at the output; it will say something about LC_ALL not being set.
Now add these two lines to /etc/environment, which is what fixed it for me:
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
This solution is for Ubuntu 12.04 and later.
Other variants may require similar work.
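One way to append the two lines from a shell, as a sketch (writing to /etc/environment needs root):
echo 'LC_ALL=en_US.UTF-8' | sudo tee -a /etc/environment
echo 'LANG=en_US.UTF-8' | sudo tee -a /etc/environment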
Unexpected mongo exit code 1 is still an uncaught exception, as far as I can tell.
You can try updating your C/C++ compilers. Have a look here.
It says:
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-4.6
sudo apt-get install g++-4.6
All the best!
So we have narrowed the issue down to Meteor's Mongo installation on your box (though I think we were pretty sure of this all along). Let's attempt to debug that a bit. The way I have done this in the past is to open Meteor's Mongo with the mongod provided by Meteor. You will perform these procedures without running the Meteor server; this should surface the warning that is causing Mongo to exit. First you need to find the binary. In my instance, installed on Mint (which should be similar to Ubuntu), it is at:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod
You can look at that location on your Ubuntu box or you can run something like this to get the location:
find ~/.meteor/ -name mongod
Once you find the location then go to the directory of your meteor project you are attempting to run and in that directory you should find this location:
<your meteor project>/.meteor/local
cd into that directory and run the following command:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod --dbpath ./
From there you can analyze the output (or update the question so we can see the output) and this should show you the mongo error you are receiving on startup and allow you to fix it.
I've had the same issue trying to start a Meteor app: the MongoDB server was being terminated unexpectedly. Virtual Linux servers from some providers, like the one you mentioned, often come without a swap partition (check the /etc/fstab file), so if there is not enough memory to allocate for the MongoDB server, the Meteor app can't start. You can create a swap partition or install swapspace:
sudo apt-get install swapspace
After that I was able to start the Meteor app. Just be patient, as swap memory is not as fast as RAM.
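If you prefer a plain swap file over the swapspace package, the usual recipe is something like this (the 1G size is illustrative; pick one that fits your droplet):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab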
(Since, due to some "smart" StackExchange policy, I cannot upvote or comment on the working solution...)
The quoted answer also works on DigitalOcean, CentOS 7 x64 (vmlinuz-3.10.0-123.8.1.el7.x86_64):
First, run the locale command and look at the output; it will say something about LC_ALL not being set.
Now add these two lines to /etc/environment.
I changed the locale setting to match my needs.
Fixed on my Debian 8 with the following bash command (use sudo if needed):
localedef -i en_US -f UTF-8 en_US.UTF-8
Ever since I changed the dbpath in /etc/mongodb.conf, MongoDB has not been starting automatically, nor using the new dbpath. Prior to the change, MongoDB would be running when the computer started, and I was able to simply run the command mongo to get into the console or start my Ruby on Rails server with no issues.
After I made the modification (in order to switch to a new drive with more space), the only way I can get everything to work is by manually running the command mongod --config /etc/mongodb.conf. If I don't run that, it doesn't seem like the service is running, and running mongod without the --config option gives me the error ERROR: dbpath (/data/db/) does not exist, even though the config file says nothing about /data/db.
Some other notes:
In addition to changing /etc/mongodb.conf, I moved all files out of /var/lib/mongodb and into /home/nick/appdev/mongodb.
I changed the owner and group from root to nick. I tried changing them back, but that didn't seem to fix anything.
I'm running Ubuntu 12.10 Beta 1 and Mongo 2.2.0 with Ruby on Rails 3.2.8
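For reference, the line I changed in /etc/mongodb.conf looks like this (old-style ini config; the path is my new location):
# /etc/mongodb.conf
dbpath=/home/nick/appdev/mongodb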
A late follow-up on the above question...
I had a similar issue after moving the db to an ebs on ec2.
It turns out that just running mongod still directs the dbpath to /data/db/ (which exists).
The /etc/mongodb.conf is completely ignored unless you specifically point to it.
I managed to work around this by using the --config directive or just --dbpath (both work),
but I was left wondering: where does mongod take its defaults from?
I was unable to locate and override these defaults anywhere.
Anyone?
Note:
I am really annoyed by this behaviour of mongod. This is just bad design, and bad documentation.
It turns out that I needed to set the owner and group to mongodb. When I transferred the files to the new directory, I had set the owner and group to my user account nick and also tried root, neither of which worked.
To do so, run the following commands:
sudo chown mongodb /home/nick/appdev/mongodb -R
sudo chgrp mongodb /home/nick/appdev/mongodb -R
To confirm that it worked, you can check the file permissions with:
ls -l /home/nick/appdev/mongodb
After checking all permissions in the data, journal, and log folders as suggested, my problem was solved by giving permissions on a lock file in the /tmp folder:
sudo chown mongod:mongod /tmp/mongodb-27017.sock
I was running it on an AWS Amazon Linux instance. I figured that out by executing mongod as the mongod user, as below, and then researching the error code. It might be useful for other troubleshooting:
sudo -S -u mongod mongod -f /etc/mongod.conf
MongoDB 1.6 is very old, and the latest production version is 2.2, which contains a large number of bug fixes and enhancements since 1.6.
Am I correct that you haven't installed 1.6 via a package manager such as yum or aptitude? I don't believe there are packages for 1.6 at present, as far as I know. Therefore, mongod is behaving correctly, as you have not started MongoDB with a control script.
Please see this link on configuration file options.
I find I have the wreckage of two old PostgreSQL installations on Ubuntu 10.04:
$ pg_lsclusters
Version Cluster Port Status Owner Data directory Log file
Use of uninitialized value in printf at /usr/bin/pg_lsclusters line 38.
8.4 main 5432 down /var/lib/postgresql/8.4/main /var/log/postgresql/postgresql-8.4-main.log
Use of uninitialized value in printf at /usr/bin/pg_lsclusters line 38.
9.1 main 5433 down /var/lib/postgresql/9.1/main /var/log/postgresql/postgresql-9.1-main.log
$
Attempts to perform basic functions return errors, for instance:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
More information comes when I try to start the database server:
$ sudo /etc/init.d/postgresql start
* Starting PostgreSQL 9.1 database server
* Error: The cluster is owned by user id 109 which does not exist any more
...fail!
$
My question: how do I completely remove both clusters and set up a new one? I've tried removing, purging, and reinstalling postgresql, following the advice here: https://stackoverflow.com/a/2748644/621762. Now pg_lsclusters shows no clusters in existence, but the No such file or directory error persists when I try to createuser, createdb or run psql. What have I failed to do?
First, that answer you linked to was pretty unsafe - hand-editing /etc/passwd?!? dselect where an apt wildcard would do? Crazy stuff. I'm not surprised you're having issues.
As for the no such file or directory messages: You need to make sure you have a running PostgreSQL server ("cluster") before you can use admin commands like createdb, because they make a connection to the server. The No such file or directory message is telling you that the server doesn't exist or isn't running.
Here's what's happening:
Ubuntu uses pg_wrapper to manage multiple concurrent PostgreSQL instances. The issues you're having are really with pg_wrapper.
Ideally you would've just used pg_dropcluster to get rid of the unwanted clusters. Unfortunately, by following bad advice, it sounds like you've got your system into a bit of a messed-up state where the PostgreSQL packages are half-installed and somewhat mangled. You need to either repair the install or totally clean it out.
I'd clean it out. I'd recommend:
Verify that pg_lsclusters lists no database clusters
apt-get --purge remove postgresql\* - this is important
Remove /etc/postgresql/
Remove /etc/postgresql-common
Remove /var/lib/postgresql
userdel -r postgres
groupdel postgres
apt-get install postgresql-common postgresql-9.1 postgresql-contrib-9.1 postgresql-doc-9.1
It's possible that the apt-get --purge step will fail because you've removed the user IDs, etc. Re-creating the postgres user ID with useradd -r -u 109 postgres should allow you to re-run the purge successfully, then delete the user afterwards.
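As a single (destructive!) shell sequence, the steps above amount to roughly this sketch; make sure nothing you need lives in these paths first:
pg_lsclusters                          # should list no clusters
sudo apt-get --purge remove postgresql\*
sudo rm -rf /etc/postgresql /etc/postgresql-common /var/lib/postgresql
sudo userdel -r postgres
sudo groupdel postgres
sudo apt-get install postgresql-common postgresql-9.1 postgresql-contrib-9.1 postgresql-doc-9.1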
This answer is not directly about removing a Postgres instance; rather, it is about resolving the issue
Error: The cluster is owned by user ...
I got this error while trying to spin up a Docker container pointed at a Postgres data directory that was produced by a different container (on a different host machine).
The error is directly related to directory ownership. In my case, the system was unable to find the user that certain Postgres directories were owned by in the current environment. Re-owning those directories to the right user resolves the issue. The following mapping worked for me:
chown -R postgres:postgres /var/lib/postgresql
chown -R postgres:postgres /etc/postgresql
chown -R postgres:postgres /var/log/postgresql
chown -R postgres:postgres /var/run/postgresql