How is a program "in" a specific location? - operating-system

When a program runs, it is, in some sense, running in a particular directory: its current working directory.
I'm trying to understand more about the idea of a cwd. How does a program know what its cwd is? Where is that information stored?
I know perfectly well how to use the os module in python, but I don't really understand what it means to have a cwd. Is it simply a data attribute, "this is where we are", that we can change arbitrarily? And we simply look for things and create things on that particular section of the HD? Or is some sort of pathway actually opening and closing actively when we change cwd, like a door getting shut and another being opened?
What happens on the computer when I change cwd in a program?
This may be language-agnostic, I am unsure.

The current working directory is (at least on most operating systems) an attribute of a process, so yes, it is more or less a simple attribute stating "this is where we are". As it's an attribute of a process, it is stored and managed by the OS kernel.
It can be changed arbitrarily by calling e.g. os.chdir in Python, and a shell similarly changes its working directory each time you run the builtin cd command. Both would normally call the same operating-system API, e.g. chdir(). Changing the cwd is subject to filesystem permissions, so you can only change the working directory to a path that actually exists and that you have permission to access.
The cwd is also involved in file operations: when a process opens a path that is not absolute, the path is resolved relative to the cwd of the process.
On Unix systems the cwd is inherited from the parent process, so a process you start from a shell gets as its cwd the directory you are in when you start it (and not, e.g., the directory containing the executable).
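A minimal sketch of those points, written here in Perl since the question is language-agnostic (the /tmp directory and the notes.txt file name are only for illustration; Python's os.getcwd and os.chdir behave the same way):
use strict;
use warnings;
use Cwd qw(getcwd);
print "started in: ", getcwd(), "\n";      # the cwd inherited from the parent process
chdir('/tmp') or die "chdir failed: $!";   # ask the kernel to record a new cwd for this process
print "now in: ", getcwd(), "\n";
# A relative path is resolved against the cwd, so this creates /tmp/notes.txt
open(my $fh, '>', 'notes.txt') or die "open failed: $!";
print {$fh} "hello\n";
close($fh);
Nothing is physically "opened" or "closed" when the cwd changes; the kernel simply records a different directory for the process, and later relative lookups start from there.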

Related

How to cd in the terminal using a Perl script, providing input from the terminal? [duplicate]

This question already has answers here:
Change the current directory from a Bash script
I am a newbie in Perl. How can I change the current working directory in the terminal as well, using Perl?
I want to provide input from the terminal, so the script will pick the respective directory.
use strict;
use warnings;
use Cwd;
my $in1 = shift;                      # e.g. "1" or "99"
my $in2 = "/usr/swarak/scripts/dir";
my $str = $in2 . $in1;                # e.g. /usr/swarak/scripts/dir1
print "$str\n";
chdir $str or die "chdir failed: $!";
print cwd(), "\n";
This changes only the directory of the running script; I want the Linux terminal's directory to change as well.
Expected output: if I give 1, it should take me to /usr/swarak/scripts/dir1; if I give 99, to /usr/swarak/scripts/dir99.
I have lots of directories, so I want this simple shortcut.
Each process on a Linux server runs in an environment. The current working directory is one of the attributes of that environment.
When a new process starts (and your Perl program runs as a new process), it inherits the current environment from its parent process. The new process can then make whatever changes it wants to its new environment. It cannot, however, change its parent's environment. When a process ends, its environment is destroyed.
So, what happens here:
You have a process (your shell window, for example) which has its environment. The current working directory is... well, let's say your home directory.
You run your Perl program. That inherits a copy of the existing environment, with the current working directory set to your home directory.
Your program changes directory to your new directory. This is in the subprocess environment. It does not affect the parent environment.
Your program ends and exits. The subprocess environment is destroyed.
You are back in your shell window environment. This environment hasn't changed. Therefore, your current working directory hasn't changed.
This is something that all Unix/Linux users have to find out for themselves at some point. You cannot write a program that changes the current working directory of its parent process.
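A minimal Perl sketch of that rule on a Unix-like system: the forked child starts with a copy of the parent's working directory, and changing it has no effect on the parent.
use strict;
use warnings;
use Cwd qw(getcwd);
print "parent cwd: ", getcwd(), "\n";
my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {
    # Child process: it inherits the parent's cwd and changes only its own copy.
    chdir('/tmp') or die "chdir failed: $!";
    print "child cwd: ", getcwd(), "\n";
    exit 0;
}
waitpid($pid, 0);
print "parent cwd after the child exits: ", getcwd(), "\n";   # unchanged
A common workaround is to have the script print the target directory and let a shell function or alias perform the cd itself, which is what the duplicate linked above describes.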

Files: bash_profile zshrc confusion

I am not sure this is clear to me, or whether my setup is neat. I'm aware of the ~/.zshrc file where I can store aliases and paths, but today, after installing node via brew, I was asked to put this: export PATH="$HOME/.npm-packages/bin:$PATH" in my ~/.bash_profile file, which didn't exist. So, in my effort to keep my system clean, I put it in the former file (~/.zshrc), but emacs complained. Now I have removed it and, after creating the file, put it in ~/.bash_profile. Is it OK to keep both in the home directory?
You need to provide the exact wording of whatever error or warning message you
get from emacs to ensure accurate or better answers. However, I will make a
guess and assume the warning you are getting is from the exec-path package.
This package has a check, which you can disable, that looks to make sure you
have variables defined in the correct init file.
In general, most shells support two types of configuration files:
Startup or Login init files
Interactive shell init files
The difference is how often or when the files are sourced (loaded). To
understand the difference, you really need to understand when a shell is run and
the relationship between each shell. I'll try to give a very high-level
explanation, but you really should read the manual page for the particular shell
you are using.
Think of your environment as a tree of shell processes. When you login to the
system, a login shell is created. This shell will be the parent of all the other
shells you create. Each time you run a command, it is executed in a new shell
(this isn't 100% accurate, but is accurate enough to explain the main
points). So when you open a terminal, it runs another shell which is a child of
your login shell. When you execute various commands, the system creates a new
shell and runs that command inside the shell. These are all children of your
parent login shell. Some shells only exist for a short period of time (as long
as it takes to execute the command), others may last for hours, days or possibly
weeks (such as the shell that emacs is running in).
The important point to keep in mind is that child shells inherit various
settings from the parent shell. The idea of the 'export' command you will see in
front of some variables is actually a command to the shell telling it to export
the variable to child shells. For example, if we have a line like
export PATH=/usr/local/bin:/usr/bin:/bin
what we are really doing is
PATH=/usr/local/bin:/usr/bin:/bin # set the variable
export PATH # make it available in child shells
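The same inheritance applies to any child process, not just child shells; here
is a small Perl sketch of it (the variable name MY_SETTING is made up):
use strict;
use warnings;
# Anything placed in %ENV becomes part of this process's environment and is
# inherited by child processes, just like an exported shell variable.
$ENV{MY_SETTING} = 'hello';
# The child process (another perl one-liner) sees the inherited value.
system(q{perl -e 'print "child sees: $ENV{MY_SETTING}\n"'}) == 0
    or die "child failed: $?";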
We don't always want variables to be exported as some variables need to be reset
in the child shell itself. For example, the variable holding the prompt string.
It would not work to have this variable only defined in the login parent shell
if you want the prompt to have dynamic components, such as the current
directory, date or time. We want these types of variables to be defined in each
shell when it is created.
To handle this, shells have the two different init files. The login init files
are only sourced for the parent shell and are particularly useful for setting
variables that will be common to all child shells. The per-shell init files are
sourced for every new shell and are best used for setting things which need to
be updated or changed each time a shell is started. There are also other shell
configuration files which can be used for other special purposes, such as when
you log out of a system, or simply to hold alias definitions, etc.
Once upon a time, it made a big difference where you put your variables as there
was a performance hit when sourcing these init files. If the per-shell init file
was too large and consumed too many resources, the whole performance of your
environment could be affected. This is much less of an issue these days due
to increased processing speeds. Unfortunately, because many people didn't
understand the role and relationships between the different shell configuration
files, there is lots of incorrect or misleading information out there regarding
where values should be set. People often advise setting variables in (for
example) bashrc when they should be set in bash_profile. The confusion is
partly caused by the fact that you can add a variable to bashrc and it will
seem to work when you test it (usually because your test involves forking a new
child shell), while putting it in your bash_profile will only take effect after
the next login.
There are also some platform differences which make things a little less
clear. For example, under OSX, there is actually a special place in the /etc
directory where you should add additional path components (the /etc/paths file,
or a per-component file in /etc/paths.d). This gives you a global place to set
paths, which ensures that desktop processes such as the Dock, which do not run
as children of your login shell, still pick up the same PATH.
As a general rule, most variables can go in the login profile, with the
exception of variables relating to the prompt or other variables which have a
dynamic content i.e. content which changes depending on time, directory
location or other tracking of interactive actions which are specific to a shell
instance.
Setting of the path (noting OS differences as described above) should go into
the profile or login configuration file. Under bash, this is .bash_profile and
under zsh, it is typically .zprofile. As bash has become the most common shell,
documentation etc. often advises adding things to .bash_profile. If you're
running zsh, then add the same information to .zprofile.
As you have said you don't have a .bash_profile, but you do have a .zshrc file, I
am assuming you are running zsh rather than bash as your login shell. This being
the case, you need to add that path setting to .zprofile in your home
directory. The exec-path package is complaining because you added it to
zshrc/bashrc, which are not the correct place to set path variables. If you're
running under OSX, you really need to add the path to the correct file in /etc
(/etc/paths, or a file in /etc/paths.d, as noted above).

Get the user machine's current working directory from Perl CGI

I am trying to get the current working directory path using Perl.
When I execute from Ubuntu: root@ubuntu:/var/test/geek# firefox http://localhost/test.html, I get /var/cgi-bin as output in the Perl CGI page instead of /var/test/geek.
The Perl code used:
use Cwd;
my $pwd = cwd();
# ... rest of the script ...
print "<h1>$pwd</h1>";
The above code gives the path of test.pl, not the user's working directory.
Edit: When I run the script alone from the terminal it works fine. For example:
root@ubuntu:/var/test/geek# /var/cgi-bin/test.pl
I get /var/test/geek, but when I call the script from the HTML page using a submit button it gives the path of the Perl script.
Each process has its own working directory that it inherits from its parent when it gets created.
cwd() returns the current process's working directory.
For a CGI script, the browser doesn't pass its working directory to the server as part of the request. To obtain it, you need code running on the client system that submits it. That might be an application that the user downloads, or possibly, though unlikely, some in-browser code like JavaScript or a Java applet (this info is likely hidden from in-browser code for security reasons, though).
(The rest assumes Linux, it will likely differ on other operating systems)
The part below assumes that you are looking for the working directory of a user on the server:
In order to get the working directory of a specific user's shell, you would need to identify the PID of that shell and read the working directory from the /proc/<pid>/cwd symlink (to read it, the process must belong to the user running the code, or the code must run as root, which is a bad idea for a CGI script). To get the PID of the shell, you would likely start from the output of the w command, or its data source, /var/run/utmp. Sys::Utmp might be useful for this. You might then also need to retrieve a whole lot of extra information to find all the processes that might have the working directory you are looking for.
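A minimal Linux-only sketch of that /proc lookup (the helper name cwd_of_pid is made up; reading another user's link needs the permissions described above):
use strict;
use warnings;
# Read a process's working directory from the /proc filesystem (Linux only).
sub cwd_of_pid {
    my ($pid) = @_;
    my $link = "/proc/$pid/cwd";
    my $dir  = readlink($link)
        or die "cannot read $link: $!";
    return $dir;
}
print cwd_of_pid($$), "\n";   # our own PID, as a sanity check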
I think you are mixing up the web server and the local user. The web server has a working directory when it runs the script, and that is the one that cwd() returns.

How to create a file in the root dir with perl?

I get an error with Perl while trying to CREATE a file called .envfile in the root dir / (UNIX only): Permission denied, which is understood. But is there a way to write this file? I need to do it without any modules, just with built-in functions. I expect to use chmod, but... honestly, I have no idea how to implement it in the same thread SAFELY.
I need this file to store my own ENVs for my software (it is a big project with many dirs and needs to operate with many ENVs of its own).
Trying simple:
my $filename = '.envfile';
open my $fh, '>', $filename or die $!;
print {$fh} "some data\n";
close($fh);
Apache says: Permission denied at /var/www/cgi-bin/env.cgi line 41.
Any help appreciated!
Thanks!
If I understand the question correctly, it appears that you also control the software which will ultimately read the file you're trying to create. Is that accurate? If so, change the program to get its environment from somewhere else. Where else? Preferably a new directory, so that you can make it writable by your web server without affecting anything else. I'd probably use /etc/myprogram (because /etc is the standard place for configuration files) or /var/local/myprogram (because /var is the standard place for persistent data files). But not an existing directory which is and should remain writable solely by root.
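A minimal sketch of that approach, assuming a made-up directory /var/local/myprogram that has already been created and made writable by the web server's user:
use strict;
use warnings;
# Write the settings somewhere the web server user is allowed to write,
# instead of the root directory.  The path and contents are only examples.
my $dir  = '/var/local/myprogram';
my $file = "$dir/envfile";
open my $fh, '>', $file or die "cannot write $file: $!";
print {$fh} "SOME_VAR=value\n";
close $fh or die "close failed: $!";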
Short of exploiting a security flaw, Perl does not allow you to sidestep filesystem security (permissions). And that is a Good Thing. If it were allowed, it would mean that anyone who finds an exploit in your Perl code could then change any file on your computer, potentially replacing it with the most malicious code ever written.
Thus, the only way that your Perl can create a file in / is if it runs as root or uses su/suid to run some other program as root. And you really, really, really do not want CGI scripts or web applications running as root because, unless you do everything absolutely perfectly in your code, and there are no exploitable bugs in perl itself, or apache, or the kernel, then, by running your web code as root, you're potentially handing root access to any random script kiddie on the internet.
If you really, truly, absolutely have no choice other than to have web-accessible code write arbitrary files to /, then the least-bad, least-insecure way to do it would be to create a very tiny helper program which takes a file name and file contents as inputs, checks that the named file does not already exist (so that an attacker can't use it to overwrite, say, your kernel), and then creates the named file with the provided contents. Aside from maybe a little additional sanity/security checking, it should do absolutely nothing else, because the more complex this helper program is, the more likely it is to contain exploitable flaws. Then have the web code run the helper via sudo, with sudoers configured to allow the web user (and only the web user) to run the helper program (and only the helper program) as root with no password.
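A hedged sketch of such a helper (all names are illustrative, and it deliberately refuses anything that is not a plain file name):
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);
die "usage: helper NAME CONTENTS\n" unless @ARGV == 2;
my ($name, $contents) = @ARGV;
# Refuse directory components so the helper can only touch files directly in /.
die "bad file name\n" if $name =~ m{/} or $name eq '.' or $name eq '..';
# O_EXCL makes the create fail if the file already exists, so the helper
# cannot be used to overwrite an existing file.
sysopen(my $fh, "/$name", O_WRONLY | O_CREAT | O_EXCL, 0644)
    or die "cannot create /$name: $!";
print {$fh} $contents;
close($fh) or die "close failed: $!";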
But don't do that unless you really, truly, absolutely have no other option. It is not the best way to do it, it is the least bad way. Which means it's still a bad idea.
Create the file 'by hand' and set its owner to the owner of the Apache process, e.g.:
sudo touch /.envfile
sudo chown www-data:www-data /.envfile
sudo chmod u+rw /.envfile
You're executing your Perl program as a user without sufficient privilege. Run the Perl program using a user with sufficient privilege (e.g. using sudo or su).

Perl Win32 - Getting the file handle for a provided file and removing it

I have a file (e.g. C:\temp\afile.txt) on which a Windows service has an open file handle. After stopping the process, the file handle remains open. I would like to be able to find and delete this handle given just the file name and path, with a Perl script. Is this possible? Thank you for your time.
It is possible to locate which process holds a file handle open and to reach into that process and close the handle, because MS's Process Explorer can do just this. How? I don't know.
You should probably use MoveFileEx(file_name, NULL, MOVEFILE_DELAY_UNTIL_REBOOT) instead. This causes the file to be deleted the next time the system is rebooted.
Win32API::File provides a Perl interface to that system call.
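A sketch of that call through Win32API::File, assuming its documented convention of passing [] where the underlying API expects a NULL pointer; the operation generally needs administrative rights, since the pending delete is recorded system-wide.
use strict;
use warnings;
use Win32API::File qw(MoveFileEx :MOVEFILE_);
my $path = 'C:\\temp\\afile.txt';
# A NULL new name ([]) plus MOVEFILE_DELAY_UNTIL_REBOOT schedules the file
# for deletion at the next reboot instead of renaming it.
MoveFileEx($path, [], MOVEFILE_DELAY_UNTIL_REBOOT())
    or die "MoveFileEx failed: $^E";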