I cannot execute any command using sudo. I get this error
-sh: sudo: command not found
First: if you are already root, you do not need sudo.
Second: if this is a Yocto-based image, as the question tag suggests, then there is no apt-get either. That is the "debianoid" way of installing things and does not apply to prebuilt-image-based distributions such as the ones Yocto produces. So you have two options:
Change to Ubuntu or Debian (or any derivative thereof); then this approach will apply.
Use the Yocto/OpenEmbedded way of installing things. This is unfortunately not exactly trivial, so you had better get started here: Yocto Project Quick Start
Maybe you need to check which user you log in as.
If you are the root user, you already have superuser rights.
If not, you need to change the configuration in your Yocto project like this:
EXTRA_USERS_PARAMS = "\
usermod -p 'password' root; \
"
I am using Neo4j on a remote server (Ubuntu 20.04) and would like to stream data from MongoDB to Neo4j. I followed the instructions here. I tried both ways by using the following approaches:
Use the following command:
sudo wget https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/tag/4.3.0.7/apoc-mongodb-dependencies-4.3.0.7.jar -O /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar
Note that the plugins directory has a different path due to mounting. I changed the path in the configuration file accordingly. This should not be causing any problems because I had the same problem before mounting.
Also, I tried to match the same release as the apoc-core file (4.4.0.3) in a separate attempt with no better outcome.
Changing the ownership and read permissions as follows didn't help either:
sudo chown neo4j:neo4j apoc-mongodb-dependencies-4.4.0.3.jar
sudo chmod 755 apoc-mongodb-dependencies-4.4.0.3.jar
Use the following commands:
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.11/mongo-java-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongo-java-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver/3.12.11/mongodb-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongodb-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver-core/4.7.1/mongodb-driver-core-4.7.1.jar -O /mnt/neo4j/plugins/mongodb-driver-core-4.7.1.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/bson/4.7.1/bson-4.7.1.jar -O /mnt/neo4j/plugins/bson-4.7.1.jar
Note that I used the latest versions. I tried the versions available in the instructions as well with no difference in the outcome.
Now, when restarting the neo4j.service, I can no longer access cypher-shell or the browser. In the first case I get "connection refused", while in the browser I get a blank page. When I check the status, the service is active and running. But I noticed that it is missing a line compared to when I don't have the dependencies:
Starting...
This instance is ServerId{#}
======== Neo4j 4.4.5 ======== (This line is missing with the dependencies downloaded!)
When I delete the dependencies from the plugins directory and restart, everything goes back to normal and functions as expected. One more thing to note is that apoc-core procedures work just fine!
I don't know if I'm doing something wrong here or if there is some sort of underlying problem!
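For what it's worth, one way to dig further when the startup banner goes missing is to look at the Neo4j logs right after a restart and double-check what actually sits in the mounted plugins directory. A rough sketch (the log path is an assumption; it is whatever dbms.directories.logs points to in neo4j.conf):

sudo journalctl -u neo4j --no-pager -n 100
sudo tail -n 200 /var/log/neo4j/debug.log   # adjust the path for the mounted setup
ls -l /mnt/neo4j/plugins/                   # look for duplicate or mismatched driver jars

A plugin jar that fails to load (for example, two different MongoDB driver versions on the classpath) will usually show up as an exception in debug.log before the point where the banner would normally appear.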
I run a CentOS 8 distro in Docker and I would like to have bash TAB completion with the dnf package manager. According to other posts, I did the following once my Docker container was started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restart the container, but there is still no bash completion. I also tried to source the bash completion file directly by doing:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need BASH Completion in a Docker container. The only time you should be manually connecting to a shell inside a Linux container is to troubleshoot why the process running in the container is behaving abnormally. In fact, some container design advice might even go as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you is due to the way in which Linux containers operate. A Container is simply a namespaced process that is managed by the kernel installed on the Host OS. This process cannot be modified or interrupted or the container will be destroyed since the process will be sent a SIGTERM. When you attempt to source the bash_completion.sh script, you are attempting to pass new configuration arguments to your existing namespaced process managed by Docker.
If you really wanted to do this, the best way would be to create a new Docker container image based on the original CentOS 8 base image, install the bash completion package there, and add an echo command that appends the source line to your user's .bashrc file. A sketch follows below.
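A minimal sketch of such an image, assuming the stock centos:8 base (note that CentOS 8 repositories have since moved to vault.centos.org, so the dnf step may need repository adjustments) and the root user inside the container:

FROM centos:8
# install bash completion support; sqlite is only needed if dnf completion should query the package database
RUN dnf install -y bash-completion sqlite && dnf clean all
# bash-completion is sourced from /etc/profile.d for login shells; add it to .bashrc for interactive non-login shells
RUN echo 'source /etc/profile.d/bash_completion.sh' >> /root/.bashrc

Build it and attach a shell to verify:

docker build -t centos8-completion .
docker run -it centos8-completion bash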
EDIT:
With regard to the additional question asked by the OP in the comments of this answer, I have added additional information below.
Why should not I need bash completion in a container
The reason you do not need bash completion in a container is because containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specific, configured criteria. Containers aren't meant to be used to create dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming to me because you are not supposed to be manually interacting with a container at all. This includes package installation. Instead, you should generate a new Container Image from the older Base Image and add additional RUN statements to the Dockerfile to update the system and install these desired packages.
Cannot believe it is not possible
It is possible if you create a new Dockerfile that purposely installs it on a new layer of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place to even get to a point where you could need something like bash completion!
Here is a great summary on the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run, and only run, processes.
When I executed sudo apt update I'm getting
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)
Also, I was getting a status error which I solved using
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status
I tried sudo mkdir /var/lib/apt/lists/partial as suggested in few other threads
mkdir: cannot create directory ‘/var/lib/apt/lists/partial’: Not a directory
I even tried sudo mkdir /var/lib/apt/lists/
Any other solution?
This answer may not fit the original question exactly, but since I landed here, others may too.
If you're using Docker and you face the same issue, you can do something like the following in your Dockerfile.
USER root
# RUN commands
USER 1001
Reference: Link
You can try adding -u 0 to the command:
sudo docker exec -u 0 -it ContainerID /bin/bash
According to Docker, the -u flag defines which username or UID the command runs as inside the container; setting -u 0 means you run it as root. Use it with caution! Reference here
The same happened to me. I followed this answer as a guide: The package lists or status file could not be parsed or opened
I assumed my lists were corrupted. I went to /var/lib/apt/ and saw a file (lists#) instead of a directory. I deleted it (sudo rm lists) and re-created the path (sudo mkdir -p /var/lib/apt/lists/partial). Double-check that the path gets created.
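The same recovery as a short command sequence, for reference (a sketch; adjust the file name if the stray entry is called something slightly different on your system):

ls -ld /var/lib/apt/lists          # a regular file here instead of a directory is the problem
sudo rm /var/lib/apt/lists
sudo mkdir -p /var/lib/apt/lists/partial
sudo apt update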
I ran into the same issue while trying to build a new container and experimenting with a Dockerfile for a while.
What finally saved me was simply deleting all the containers I had created during this process using docker rm.
I had this same issue when trying to install Typora on Ubuntu 20.04.
I was running into the error whenever I ran the command below:
# add Typora's repository
sudo add-apt-repository 'deb https://typora.io/linux ./'
Here's how I solved it:
I disconnected and reconnected my network connection, and when I ran the command again, it worked fine.
I think it was an issue with my network connectivity.
That's all.
I hope this helps
I had a similar error when using the Bitnami Spark image, and the docker exec command with the -u argument didn't work for me. I found my answer in the image documentation here.
If you are using a Docker image, it might be a non-root container image. Read the image provider's documentation to see how you can run it as a root container.
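As a rough illustration (the image name is just an example, not taken from the question), many non-root images can be started with a root user by overriding the user on the command line, or by setting user: root on the service in docker-compose.yml:

docker run --rm -it --user root bitnami/spark:latest bash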
This is how to get access as root in a Docker container's bash and install your apps.
Get the container ID by name:
sudo docker ps -aqf "name=es01"
Access bash as root:
sudo docker exec -u 0 -it 3d42134dfd59 bash
Example install:
apt-get update
apt-get install nano
You first need to gain superuser privileges by typing sudo -i and then entering your password.
I am installing CUDA on my GPU machines. While at it, I need to disable Nouveau Kernel Driver.
I did find a solution here: https://askubuntu.com/questions/841876/how-to-disable-nouveau-kernel-driver
But update-initramfs is not found on CentOS.
I am looking for an equivalent of sudo update-initramfs -u in CentOS
If your goal is to install the latest NVIDIA driver to run with CUDA, the best way to disable nouveau is indeed to rebuild the initramfs, as written by Gediz. Since, as spotted here (https://forums.centos.org/viewtopic.php?t=68800), it is only a five-step process, I think it deserves to be right here (a quick check after the reboot is sketched after the steps):
grubby --update-kernel=ALL --args="rd.driver.blacklist=nouveau nouveau.modeset=0"
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
echo "blacklist nouveau" > /etc/modprobe.d/nouveau-blacklist.conf
dracut /boot/initramfs-$(uname -r).img $(uname -r)
reboot
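A quick check after the reboot, as a small sketch (an empty result from the first command simply means nouveau is no longer loaded):

lsmod | grep nouveau      # no output means nouveau is not loaded
cat /proc/cmdline         # should show rd.driver.blacklist=nouveau nouveau.modeset=0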
I believe the nouveau driver can be easily unloaded using modprobe:
modprobe -r nouveau
Also there is an option -b which blacklists it.
-b, --use-blacklist Apply blacklist to resolved alias.
In the web address you attached there is:
option nomodeset
I guess it is a kernel option not to load ANY display drivers. You won't always need to update the initramfs; you only need to update it if the module is included in the initramfs.
You can check it using one of the initramfs-tools utilities:
lsinitramfs /boot/initrd.img-4.9.0-5-amd64 |less
However, if you need to update or rebuild the initramfs, there is a way shown in the CentOS wiki:
https://wiki.centos.org/TipsAndTricks/CreateNewInitrd
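Since the question asks for a CentOS equivalent of update-initramfs, the dracut tooling covers both halves; roughly like this (treat the exact flags as a sketch; the kernel version comes from uname -r):

lsinitrd /boot/initramfs-$(uname -r).img | grep -i nouveau    # inspect the current initramfs (counterpart of lsinitramfs)
sudo dracut -f /boot/initramfs-$(uname -r).img $(uname -r)    # rebuild it (counterpart of update-initramfs -u)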
This is really frustrating me. I have a DigitalOcean VPS with Ubuntu 14.04 (64-bit) installed.
I installed VestaCP as control panel on that and have hosted some PHP based personal project.
I also installed Meteor on it but never used it. Now I am trying to create a project and run it ('meteor create rt', then 'cd rt', then 'meteor').
It is giving the following error:
[[[[[ /home/admin/code/rt ]]]]]
=> Started proxy.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Can't start Mongo server.
root@RD:/home/admin/code/rt#
Could anyone please help? Please ask me for more informations if required.
**** EDIT ****
I created a fresh DigitalOcean server and it gives the same error there. Some issue with DigitalOcean? Its file system? I am confused. I have tried it on different flavours of Linux with the same result. All are fresh Linux installations.
I finally got the solution. Posting it here for others.
The problem was that a few environment variables which MongoDB looks for while starting were not set.
Set the variables LC_ALL and LANG and it works fine (usually setting LC_ALL will do).
First, type the locale command and look at the output; it will say something about LC_ALL not being set.
Now, add these two lines to /etc/environment:
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
This solution is for Ubuntu 12.04 and later.
Other variants may require similar work.
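A quick way to check and apply this, as a sketch (the locale-gen step is only needed if the locale has not been built yet, and applies to Debian/Ubuntu):

locale                                # look for warnings about LC_ALL being unset
sudo locale-gen en_US.UTF-8           # build the locale if it is missing
echo 'LC_ALL=en_US.UTF-8' | sudo tee -a /etc/environment
echo 'LANG=en_US.UTF-8' | sudo tee -a /etc/environment
# log out and back in (or start a new login shell) so /etc/environment is re-read, then run meteor again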
As far as I can tell, "Unexpected mongo exit code 1" is still an uncaught exception.
You can try updating your C/C++ compilers. Have a look here.
It says :
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-4.6
sudo apt-get install g++-4.6
All the best!
So we have narrowed the issue down to Meteor's Mongo installation on your box (though I think we were pretty sure of this all along). Let's attempt to debug that a bit. The way I have done this in the past is to start Meteor's Mongo manually with the mongod binary that Meteor ships. You will perform these steps without running the Meteor server; this should surface the warning that is causing Mongo to exit. First you need to find that binary. On my instance installed on Mint (which should be similar to Ubuntu) it is at:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod
You can look at that location on your Ubuntu box or you can run something like this to get the location:
find ~/.meteor/ -name mongod
Once you find the location then go to the directory of your meteor project you are attempting to run and in that directory you should find this location:
<your meteor project>/.meteor/local
cd into that directory and run the following command:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod --dbpath ./
From there you can analyze the output (or update the question so we can see the output) and this should show you the mongo error you are receiving on startup and allow you to fix it.
I had the same issue trying to start a Meteor app: it is exactly the MongoDB server that gets terminated unexpectedly. Virtual Linux servers from providers like the one you mentioned generally come without a swap partition (check the /etc/fstab file), so if there is not enough memory to allocate for the MongoDB server, the Meteor app can't be started. You can create a swap partition or install swapspace:
sudo apt-get install swapspace
After that I was able to start the Meteor app. Just be patient, as swap memory is not as fast as RAM.
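If you would rather not install swapspace, a plain swap file does the same job; a minimal sketch (the 1G size is an arbitrary example, adjust it to the droplet):

sudo fallocate -l 1G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # keep it across reboots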
(Since, due to some "smart" StackExchange policy, I cannot upvote or comment on the working solution, I am posting this as an answer.)
The quoted answer also works on DigitalOcean with CentOS 7 x64 (vmlinuz-3.10.0-123.8.1.el7.x86_64):
First, type the locale command and see the output; it will say something about LC_ALL not being set.
Now, add these two lines to /etc/environment and it worked.
I changed the locale setting to match my needs.
Fixed on my Debian 8 with the following bash command (use sudo if needed):
localedef -i en_US -f UTF-8 en_US.UTF-8
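To confirm the locale is now available, something like this should do (the name reported by locale -a is usually en_US.utf8 rather than en_US.UTF-8):

locale -a | grep -i en_us     # the newly built locale should appear here
locale                        # should no longer complain about LC_ALL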