In Ubuntu, is it a good idea to launch eclipse with sudo? [closed]

I recently used Maven to download Mahout and Hadoop. Because I could not seem to do that without using sudo mvn commands, Eclipse could not use anything I had downloaded (there were lots of errors such as the parent directories of files like pom.xml being permission denied). More recently I was trying out Mahout (with local jars downloaded directly from one of Apache's mirrors, not from Maven), and although I could run the class the first time, I couldn't run it again because my Eclipse instance could not overwrite the file I had already written.
These are just examples of times when I feel it would have been better to run Eclipse as the superuser by doing
sudo eclipse
instead of just launching it normally. The only problem I can think of is that, as root, Eclipse suggests using root's workspace, but is it OK to just tell it to use yourusername/workspace?

In general, no. It's tempting, but it is not good practice to do all of your development as the superuser. If you're running Eclipse as root, then you're also launching Java processes as root when you run your software. (You could change your Java run settings to sudo back to a regular user before running, but I wouldn't recommend that as a solution.)
In addition to being a security risk, you are also making it harder to track down bugs if you want to distribute the software to others who will run it as non-root (e.g. doing root-only things like reading a protected file or binding to a well-known port might work for you, but not for the average user).
I recommend finding the files that are causing issues and doing chmod o+r on them.
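For example, a minimal sketch of cleaning up after the sudo mvn runs described above, assuming the affected files live in the default Maven local repository at ~/.m2 (adjust the path if yours differs):
# List files under the local Maven repository that other users cannot read
find ~/.m2/repository -type f ! -perm -o=r
# Make them world-readable, as suggested above
chmod -R o+r ~/.m2/repository
# Or, more simply, hand the whole repository back to your own user
sudo chown -R "$USER": ~/.m2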

Related

Why has the code command stopped working when using sudo in WSL? [closed]

I am still able to launch files with the code command, but if I try using sudo along with it I get sudo: code: command not found. It worked fine in the past, not sure how long it's been broken for me. It was nice being able to edit .rc files in code instead of nano, but I need root privileges to save those files.
I have tried uninstalling/reinstalling the WSL extensions in VSC, adding export PATH="/usr/share/code/bin:$PATH" in my .zshrc, and adding new aliases per this guide.
sudo likely resets your environment, including PATH, for safety reasons (I believe this is the default on Ubuntu and possibly other distros). Even if you extend PATH to include VS Code in your .zshrc, it will be reset when you use sudo. To verify this, you can run sudo zsh and then type echo $PATH.
To keep your environment you can either use the sudo -E switch:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
or run visudo and add the following configuration to your sudoers file, which makes this the default behavior, limited to the PATH environment variable:
Defaults env_keep += "PATH"
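For instance, a minimal sketch of the first option (the file being edited here is just an illustration):
# Run VS Code as root while keeping the invoking user's environment, including PATH
sudo -E code ~/.zshrc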

Preventing brew cleanup from deleting specific old version of software [closed]

I am a massive fan of Homebrew and have taken to using it to manage all my applications. One very useful feature is brew switch, which enables switching between different versions of Ansible, something I require to compile some of my websites that run older software.
However, I have noticed that whenever I run brew cleanup, it deletes all old versions, even version 2.3.2.0 of Ansible, which I still require alongside the most current version.
After sifting through numerous forums and sites, I have been unable to find a way to keep both this old version of Ansible and the most current one when using the brew cleanup command, other than cleaning everything up manually.
Does anyone have a workaround or solution? I thought brew pin might be a possibility, but it seems to only work with the version currently linked.
I don't see a clean built-in way to do this with brew cleanup, but here is a workaround: since brew cleanup optionally takes a list of formulae to clean up, we can build a list that contains everything except Ansible.
This is how I can get that list:
brew list | grep -v ansible
And this is how I can call cleanup to ignore Ansible:
brew cleanup $(brew list | grep -v ansible)
Maybe I want that as an alias somewhere, like bca for "brew cleanup (but not) ansible":
alias bca='brew cleanup $(brew list | grep -v ansible)'
and add that line to my ~/.bashrc.

virtual environment requirements.txt [closed]

I would like to put a requirements.txt file in my virtual environment. The two ways I have thought of are making a .txt file and then moving it to the correct directory (I do not know how to find that directory; when I type workon, the PathToScripts and PathToSitePackages say C:\Users\A.virtualenvs...), and making a requirements.txt file once I am already in that directory (I do not know how to make a file while in the correct directory). Are either of these a good way to go about solving this problem? Is there a better way to do this?
The normal thing to do is to put the requirements.txt file in the root of your application's source code. This way you can place it under version control with the rest of your application artifacts. That's what virtualenvwrapper expects you to do; it's why virtualenvwrapper distinguishes between the directories where the virtual environments are created and the working directory you specify when creating one. I understand why you might want to put requirements.txt in with the virtual environment, but it's not the usual way.
There might be a way to specify that the virtual environment directory be the same as the working directory. You could try specifying that the working directory be the same as where the virtual environment gets built when you create it, or you could edit the file after the fact. But it's not really the way people usually do things.
You'll have to look in the docs for the directory where the virtual environments are created on the OS you're using. Under Linux they get put into a hidden directory in your home directory.
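As a minimal sketch of the usual workflow (run from the root of your project with the virtual environment active; the recorded packages will obviously differ per project):
# Record the currently installed packages into requirements.txt
pip freeze > requirements.txt
# Later, or on another machine, recreate the same environment from it
pip install -r requirements.txt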

How to recover files deleted using the "rm -R" command on a Linux server? [closed]

I have unfortunately deleted some important files and folders using the 'rm -R' command on a Linux server.
Is there any way to recover them?
Since the other answers are disappointing, I would like to suggest a way in which I got my deleted files back.
I use an IDE to code, and I accidentally used rm -rf from the terminal to remove a complete folder. Thanks to the IDE, I recovered it by reverting the change from the IDE's local history.
(My IDE is IntelliJ, but most IDEs support some form of local history.)
Short answer: You can't. rm removes files blindly, with no concept of 'trash'.
Some Unix and Linux systems try to limit its destructive ability by aliasing it to rm -i by default, but not all do.
Long answer: Depending on your filesystem, disk activity, and how long ago the deletion occurred, you may be able to recover some or all of what you deleted. If you're using an ext3- or ext4-formatted drive, you can check out extundelete.
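For instance, a rough sketch of how extundelete is typically run (the device name is only a placeholder; unmount, or remount read-only, before attempting recovery):
# Stop further writes to the affected partition
sudo umount /dev/sdXN
# Recover whatever extundelete can find; results go into ./RECOVERED_FILES
sudo extundelete /dev/sdXN --restore-all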
In the future, use rm with caution. Either create a del alias that provides interactivity, or use a file manager.
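A minimal sketch of such an alias (the name del is just the example used above; add it to ~/.bashrc or ~/.zshrc):
# Prompt before every removal
alias del='rm -i'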
Not possible with standard Unix commands. You might have luck with a file recovery utility. Also, be aware that rm only updates the filesystem metadata to mark those blocks as available to be overwritten, so simply using your computer right now risks those blocks being overwritten permanently. If it's critical data, you should turn off the computer before the file sectors get overwritten. Good luck!
A guide to some recovery utilities (foremost/scalpel):
http://www.ubuntugeek.com/recover-deleted-files-with-foremostscalpel-in-ubuntu.html
Forum where this was previously answered:
http://webcache.googleusercontent.com/search?q=cache:m4hiPw-_GekJ:ubuntuforums.org/archive/index.php/t-1134955.html+&cd=1&hl=en&ct=clnk&gl=us

Restore postgresql from files [closed]

I have a big problem: I managed to accidentally uninstall the whole PostgreSQL DBMS from my hard drive. I also lost my database and had not made any dumps of the data it contained. I do, however, have a backup of all files from the server. Is it possible to somehow restore the database from these files?
The OS I am using is Debian 6, and the DBMS version is PostgreSQL 8.4.
If it is indeed possible, then how should I go about achieving this?
P.S. Sorry for my English.
Make sure your backup is safe. So long as we have that, we can start again.
Restore the PostgreSQL server software (check the exact package names):
apt-get install postgresql-8.4 postgresql-client-8.4 postgresql-contrib-8.4
Stop the server
/etc/init.d/postgresql stop
Restore all your data files. Make sure the ownership is correct:
cd /var/lib/postgresql/8.4/
mv main main.OLD
cp -a /path/to/backup/main .
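# Make sure ownership matches what PostgreSQL expects (on Debian the data directory
# is normally owned by the postgres user; adjust if your setup differs)
chown -R postgres:postgres main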
/etc/init.d/postgresql start
Check the logs (/var/log/postgresql/...) - if your backup occurred while the database was idle you are probably in luck.
Note that you need everything in .../main/ - the database files are in main/base, but the transaction logs and other assorted bits and pieces are needed too.
If you get problems, check your permissions and check your postgresql.conf file (restore that from backup too if you have it, and pg_hba.conf etc. as well). There might be some other packages you need to install if you were using PL/Perl or some such earlier.
Now, if you get errors complaining about missing log files or bad blocks, that means the backup happened while the database was writing to disk and there may be corruption. However, let's be optimistic and hope for the best.
If it works, check that everything looks OK and take a pg_dump of any databases you want straight away.
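For example, a minimal sketch of dumping everything immediately (the output path is just a placeholder):
# Dump every database in the cluster to one SQL file, running as the postgres user
sudo -u postgres pg_dumpall > /path/to/safe/location/all_databases.sql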