Broke my terminal with a bad PATH [duplicate] - command-line

This question already has an answer here:
Wrong path set and now .bashrc throws errors [duplicate]
(1 answer)
Closed 6 years ago.
I put an invalid path into my bashrc and now my terminal doesn't work. I've deleted my bash_profile as well as my bashrc and I still can't get it to work.
Every command I use in terminal gives me the error:
sh: parse_git_branch: command not found
This was something I had in my bash_profile for git autocompletion. The problem is that once I've deleted the files using /bin/rm <file>, I can't apply my changes with source ~/.bash_profile because it gives me errors. I've looked at this question, and this question, but I'm still having no luck.
How can I fix my command line after breaking things like this?

You can export a correct PATH in your current session using a command similar to:
export PATH=/usr/local/bin:/usr/bin:/bin
After using this, you should be able to source your bashrc.

You can always change your path right on the command line for the rest of the current shell session: PATH="/usr/local/bin:/usr/sbin:/usr/bin:/bin:/sbin"
Exporting the PATH won't solve your problem on its own. It only makes the value of PATH visible to child shells/processes, which achieves the same thing as resetting/changing the PATH variable from the command line (as shown above): the scope is limited to the current shell process and its children while you are working within that shell environment. If you open a new terminal (start a fresh login shell, a separate top-level process), bash will still try to read the resource files (.bash_profile and .bashrc), which no longer exist.
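To see that scope limitation concretely, here is a small sketch (the directories are just example values):
export PATH=/usr/local/bin:/usr/bin:/bin   # fixes this shell...
bash -c 'echo $PATH'                       # ...and anything it starts, since children inherit the export
# a brand-new terminal, however, starts fresh and reads only ~/.bash_profile / ~/.bashrc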
So, we need to be able to widen our scope and tell bash that we want to make changes for all future bashes (shell processes--new shells, shells within shells, etc.), and the way that is done is by writing those statements in the resource files.
Once you set your PATH there, every shell session will start with those resources loaded (all the aliases, commands, variables, and functions stored in those files), regardless of whether you export them or not, since each shell process consults those files before it does anything else (so they effectively pre-emptively import what you are trying to export to them).
bash is like any other script or program. You took away all its inputs, pointers, and shortcuts, so for now it is pretty dumb; you've got to be explicit with everything you do until you teach it where to look for things (give it a PATH to search) and can shorten your conversations with it.
If you don't have a backup of your .bash_profile, here's one that you can use:
# This loads in the configuration in .bashrc
# Put all configuration in there!
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
And here's a .bashrc that can get you started:
export PATH="/bin:/sbin:/usr/local/bin:/usr/bin:/usr/sbin:"

Related

Editing WSL2 instance of Ubuntu Crontab using Windows VSCode

My question is whether it is possible to edit the crontab of a WSL2-based instance of Ubuntu with my Windows VSCode that is connected via WSL remote SSH.
If I type export EDITOR=code inside my WSL instance and then crontab -e, I am able to see a /tmp/crontab.sygFAU file load inside my VSCode instance.
The problem is that once I make edits to this file, it will save the file to /tmp/crontab.sygFAU, but it doesn't actually take the next step of replacing the real crontab file in /var/spool/cron/crontabs.
So once I re-open the crontab, it will just show what I had previously, and not my saved edits.
It would be nice to know if this is not possible or if there are any alternative ways to run a GUI editor because using nano is a pain!
An interesting question that I haven't considered before, myself. Here's what's happening:
You set your editor to code
You crontab -e, which properly loads VSCode with the temporary crontab.
However, because this is a Windows GUI application, it returns control to the parent Linux application (crontab) immediately after starting VSCode. You can see the same result if you just start notepad.exe from your shell in WSL: once Notepad starts (rather than when it exits), control is returned to the shell.
If you switch back to your terminal at this point, you'll see that crontab detected that the editor that it launched exited (returned), and so it has already tried to copy the temporary file to the permanent location.
However, since the temporary file doesn't yet have any changes, crontab decides there's nothing to do.
Editing the file in VSCode and saving it has no effect, other than to leave a dangling /tmp/... file hanging around (since crontab isn't there to clean up).
So what's the solution? We need a way to launch a Windows GUI application and prevent it from returning control to crontab until you are done editing.
I originally thought something from this question might work, but the problem is that the actual command that launches the Windows process is embedded in a shell script, which you can see with less "$(which code)" (or code "$(which code)"), but it's probably not a great idea to edit this.
So the next-best thing I came up with is a simple "wrapper" script around the (already-a-wrapper) code command. Create ~/.local/bin/code_no_fork.sh (could be anywhere) with:
#!/usr/bin/env bash
# Launch VSCode, then block until the user presses Space, so that crontab
# doesn't see the "editor" return until editing is really finished.
code "$@" > /dev/null
echo "Press Spacebar to continue"
read -r -s -d ' '
Credit: This answer for the Spacebar approach
Then:
EDITOR=~/.local/bin/code_no_fork.sh crontab -e
After you make your edits in VSCode, simply press Space to allow the script to continue/exit, at which point crontab will (assuming no errors were detected) install the new Crontab.
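One detail not spelled out above but required: the wrapper script must be executable, e.g.:
chmod +x ~/.local/bin/code_no_fork.sh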
Alternatives
This should typically only be a problem with Windows GUI applications, so the other possible avenue is to simply use any Linux editor that doesn't fork. If you want a GUI editor, that's entirely possible as long as you are running a WSL release that includes WSLg support (now available for Windows 10 and 11).
I won't offer any individual editor suggestions since that would get into "opinion" and "software recommendation" territory, which is off-topic here.

fish shell functions not accessing user paths

I've recently started using fish, and I needed to use a jar file for google's bundletool.
As such, I needed to set up an alias/function for bundletool, and I chose a function since it seems more "fishy".
My function is simple:
function bundletool
java -jar bundletool-all-1.12.1.jar $argv
end
The bundletool jar itself lives at ~/.local/bin, which is on my fish user path:
lase#laser-razer /m/c/U/matth [1]> echo $fish_user_paths
/home/lase/.local/bin /usr/local/go/bin /home/lase/.nvm /home/lase/.cargo/bin /home/lase/.cargo
In a regular shell, I can execute java -jar bundletool-all-1.12.1.jar, and the command runs as expected. However, in the function, fish doesn't seem to know about my fish_user_paths, and reports it cannot find the jar file.
To remedy this, I had to update my function to specify the full path:
function bundletool
java -jar ~/.local/bin/bundletool-all-1.12.1.jar $argv
end
This works, but I feel like I'm doing something incorrectly. Should I be setting up my paths or functions in a different fashion?
Your function will run in the ordinary environment.
It will run with the current $PATH [0] and, more importantly to you, the current working directory.
What happens is this:
java -jar bundletool-all-1.12.1.jar
will tell java to run the jar found at bundletool-all-1.12.1.jar.
Notably, fish just hands java the string "bundletool-all-1.12.1.jar", and java will then not look at $PATH. It does not care about $PATH. It simply looks at "bundletool-all-1.12.1.jar", and tries to open a file by that name.
And it will find that file if it is in the current directory.
That's the reason this worked for you when you executed it interactively: you happened to be in that directory.
Then you tried it with the function, but from a different directory, and so it didn't work.
This works, but I feel like I'm doing something incorrectly. Should I be setting up my paths or functions in a different fashion?
No, giving the full path to the file instead of relying on the working directory is the right thing to do.
[0]: $fish_user_paths is just a variable you set, that fish will then take care to add to $PATH. $PATH is the actual variable that fish and other tools (including any command fish starts, if it wants to) will use to find commands.
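A quick way to inspect that relationship from a fish prompt (this only reads the variables, it changes nothing):
echo $fish_user_paths   # the directories you asked fish to add
echo $PATH              # the full search path fish (and the commands it starts) actually uses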

Weird CMD - VSCode behavior

Yesterday I had SumatraPDF and Visual Studio Code with latex-workshop working with forward and reverse search. Today the reverse search no longer works. With a simple bat file I tried to show the command-line arguments.
echo %*
pause
They seemed correct, and when I copied the command and pasted it into a new cmd window, it worked. For further testing I put the command directly into the bat file and ran it.
"C:\....\Code.exe" -g "%1:%2"
pause
Visual Studio Code responds: bad option -g
In SumatraPDF I set cmd as command.
Now the behavior is that I have two cmd windows. With the cmd opened directly, VS Code opens the file. With the cmd launched indirectly, I get the error message.
I have tried resetting the environment variables, changing the current working directory, and checking the code page currently in use.
How can it be that cmd behaves differently with seemingly the same environment? And what can I do to make a cmd started from an application work like a cmd started by Windows?
Update: It appears there were a couple of recent security changes in the way Code.exe is allowed to interact with the command line (especially affecting LaTeX Workshop users), so for the changes made twice this year see the discussion at https://forum.sumatrapdfreader.org/t/inverse-search-not-performed-for-vs-code-exe/4486/27
Within SumatraPDF, the reverse-search (SyncTeX) command uses %f for the file and %l for the line, in place of the %1 and %2 you used.
Reverse search is triggered by a double click near the line of interest, and if the SyncTeX index file was compiled correctly by PdfLaTeX (or similar), it will supply the .tex %f(ilename) and the nearest %l(ine) reference to the point where you double clicked.
Thus your SyncTeX-enhanced "reverse search" call out of SumatraPDF should historically be
"C:\...path to...\Code.exe" -g "%f:%l"
(that's a lower-case L, not the digit 1)
Avoid using any deprecated -inverse-search parameter passed from a LaTeX editor; add the command once into SumatraPDF-settings.txt, and then it won't be disturbed by repeated assignments when running your -forward-search.
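For reference, the relevant line in SumatraPDF-settings.txt would look roughly like this (the key name is SumatraPDF's advanced inverse-search setting; the Code.exe path is whatever yours actually is):
InverseSearchCmdLine = "C:\...path to...\Code.exe" -g "%f:%l"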
It will NOT work if the file.synctex or file.synctex.gz index is corrupted by a bad PDF compilation.
HOWEVER, it seems Microsoft has added the requirement for a CLI.js handler, which requires another switch setting after that (see the link to the discussion in the Update above).
For a small test, download https://github.com/GitHubRulesOK/MyNotes/raw/master/AppNotes/SumatraPDF/LATeX%20and%20Reverse-Search.zip, unpack it, and open sync.pdf in SumatraPDF to check that a double click on the page opens sync.tex in the editor.
If the message is "cannot start ...", then the command line is not configured correctly. A rough test for a bad synctex setup is to see what happens if the call is changed by adding cmd /k echo to the start, since that will show the reverse-search command that would run. Here I wrote "wrong" as the path to code.exe; once it is corrected, I can remove cmd /k echo.
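That debugging variant of the setting would look something like this (same placeholder path as above):
cmd /k echo "C:\...path to...\Code.exe" -g "%f:%l"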
For some other systems where the reverse-search command might differ,
see https://github.com/sumatrapdfreader/sumatrapdf/issues/1197#
However, there should be no interference with a valid VS Code call.

Files: bash_profile zshrc confusion

I am not sure this is clear to me, or whether it is neat for my system. I'm aware of the ~/.zshrc file where I can store aliases and paths, but today, after installing node via brew, I was asked to put this: export PATH="$HOME/.npm-packages/bin:$PATH" in my ~/.bash_profile file, which doesn't exist. So, in my effort to keep my system clean, I put it in the former file, but emacs complained. Now I've removed it and put it, after creating the file, in ~/.bash_profile. Is it OK to keep both in the home directory?
You need to provide the exact wording of whatever error or warning message you
get from emacs to ensure accurate or better answers. However, I will make a
guess and assume the warning you are getting is from the exec-path package.
This package has a check, which you can disable, that looks to make sure you
have variables defined in the correct init file.
In general, most shells support two types of configuration files
Startup or Login init files
Interactive shell init files
The difference is how often or when the files are sourced (loaded). To
understand the difference, you really need to understand when a shell is run and
the relationship between each shell. I'll try to give a very high-level
explanation, but you really should read the manual page for the particular shell
you are using.
Think of your environment as a tree of shell processes. When you login to the
system, a login shell is created. This shell will be the parent of all the other
shells you create. Each time you run a command, it is executed in a new shell
(this isn't 100% accurate, but is accurate enough to explain the main
points). So when you open a terminal, it runs another shell which is a child of
your login shell. When you execute various commands, the system creates a new
shell and runs that command inside the shell. These are all children of your
parent login shell. Some shells only exist for a short period of time (as long
as it takes to execute the command), others may last for hours, days or possibly
weeks (such as the shell that emacs is running in).
The important point to keep in mind is that child shells inherit various
settings from the parent shell. The idea of the 'export' command you will see in
front of some variables is actually a command to the shell telling it to export
the variable to child shells. For example, if we have a line like
export PATH=/usr/local/bin:/usr/bin:/bin
what we are really doing is
PATH=/usr/local/bin:/usr/bin:/bin # set the variable
export PATH # make it available in child shells
We don't always want variables to be exported as some variables need to be reset
in the child shell itself. For example, the variable holding the prompt string.
It would not work to have this variable only defined in the login parent shell
if you want the prompt to have dynamic components, such as the current
directory, date or time. We want these types of variables to be defined in each
shell when it is created.
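For instance, a prompt string that shows the current directory is the kind of per-shell setting being described (a hedged sketch; the prompt strings themselves are arbitrary examples):
PS1='\u@\h \w \$ '             # bash: user@host current-directory, rebuilt by each interactive shell
# the zsh equivalent would be: PROMPT='%n@%m %~ %# '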
To handle this, shells have the two different init files. The login init files
are only sourced for the parent shell and are particularly useful for setting
variables that will be common to all child shells. The per-shell init files are
sourced for every new shell and are best used for setting things which need to
be updated or changed each time a shell is started. There are also other shell
configuration files which can be used for other special purposes, such as when
you log out or log off a system, or files in which to just put alias definitions, etc.
Once upon a time, it made a big difference where you put your variables as there
was a performance hit when sourcing these init files. If the per-shell init file
was too large and consumed too many resources, the whole performance of your
environment could be affected. This is largely less of an issue these days due
to increased processing speeds. Unfortunately, because many people didn't
understand the role and relationships between the different shell configuration
files, there is lots of incorrect or misleading information out there regarding
where values should be set. People often advise setting variables in (for
example) .bashrc when they should be set in .bash_profile. The confusion is
partly caused by the fact that you can add a variable to .bashrc and it will work
when you test it (usually because your test involves forking a new child shell),
whereas putting it in your .bash_profile will only take effect after the next login.
There are also some platform differences which make things a little less
clear. For example, under OSX there is actually a special file in the /etc
directory where you should add additional path components (I'm not on a Mac just
now, but it is something like /etc/paths, or a per-path-component file in
/etc/paths.d). This is done so that you have a global place to set paths, which
ensures that desktop processes such as the Dock, which do not run as a child
process of your login shell, also pick up the setting.
As a general rule, most variables can go in the login profile, with the
exception of variables relating to the prompt or other variables which have a
dynamic content i.e. content which changes depending on time, directory
location or other tracking of interactive actions which are specific to a shell
instance.
Setting of the path (noting OS differences as described above) should go into
the profile or login configuration file. Under bash, this is .bash_profile and
under zsh, it is typically .zprofile. As bash has become the most common shell,
documentation etc. often advises adding things to .bash_profile. If you're running
zsh, then add the same information to .zprofile.
As you have said you don't have a .bash_profile, but you do have a .zshrc file, I
am assuming you are running zsh rather than bash as your login shell. That being
the case, you need to add that path setting to .zprofile in your home
directory. The exec-path package is complaining because you added it to
.zshrc/.bashrc, which is not the correct place to set path variables. If you're
running under OSX, you really need to add the path to the correct file in /etc
(you will need to check the OSX documentation as I cannot remember the precise
filename).
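For the question at hand, a minimal sketch of what that could look like (assuming zsh is the login shell and using the npm path from the question) is:
# ~/.zprofile  (zsh login init file)
export PATH="$HOME/.npm-packages/bin:$PATH"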

Source configuration file in terminal through Perl script

I need to source the configuration file 'eg.conf' into the terminal through a Perl script. I am using the system command, but it's not working.
system('. /etc/eg.conf')
Basically, I am writing a script which at a later point will use the environment variables (from the conf file) to execute other processes.
It is not clear what you are trying to achieve, but if you want to make the config available from within Perl AND your config file is valid Perl code you can use do or require (see perldoc for more information).
What you are doing in your code is spawning a shell with system, sourcing the config inside that shell (where it must be in shell syntax), and then exiting the shell again, which of course throws all the config away on exit. I guess this is not what you intend to do, but your real intention is not clear.
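If the goal is only to make those settings visible to processes that the Perl script itself starts later, one minimal sketch (assuming eg.conf contains plain KEY=value lines, possibly prefixed with export) is to parse it into %ENV instead of sourcing it in a throw-away shell:
# read simple KEY=value lines from the config into this process's environment;
# anything started later via system() will inherit these values
open my $fh, '<', '/etc/eg.conf' or die "cannot open /etc/eg.conf: $!";
while (my $line = <$fh>) {
    chomp $line;
    next if $line =~ /^\s*(?:#|$)/;                 # skip comments and blank lines
    if ($line =~ /^\s*(?:export\s+)?(\w+)=(.*)$/) {
        my ($key, $value) = ($1, $2);
        $value =~ s/^["']|["']$//g;                 # strip surrounding quotes, if any
        $ENV{$key} = $value;
    }
}
close $fh;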
What is your goal? Do you need to source eg.conf to set up further calculations from within a perl controlled shell, or are you trying to affect the parent shell that is running the perl script?
Your example call to system('. /etc/eg.conf') creates a new shell subprocess. /etc/eg.conf is sourced into that shell at which point the shell exits. Nothing is changed within the perl script nor in the parent process that spawned the perl script.
One cannot modify the environment of a parent process from a child process without the assistance of the parent process[1]. One generally returns code for the parent shell to source or to eval.
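A hedged illustration of that pattern (myscript.pl is a hypothetical name): the Perl script prints shell assignments, and the invoking shell evaluates them:
eval "$(perl myscript.pl)"   # myscript.pl prints lines such as: export FOO=bar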
1: ok, one could theoretically affect the parent process by directly poking into its memory space. Don't do that.