When I removed the Solaris_add-point package (via the pkgrm command), I saw lines in the output saying "shared pathname not removed".
Please help me understand what this means, and whether pkgrm can be forced to remove the shared pathnames.
Example from the pkgrm output:
## Removing pathnames in class <none>
/etc/cn/scripts/fmt.ksh
/etc/cn/scripts <shared pathname not removed>
/etc/cn <shared pathname not removed>
/etc <shared pathname not removed>
It means that other packages are still installed that use those pathnames, and they won't be removed until all of those packages are removed.
This is a good thing. You really, really do not want the /etc directory removed - that would kill the OS on the machine and make it impossible to boot.
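If you want to see which packages claim a given directory, Solaris records per-pathname ownership in the installed-files database, /var/sadm/install/contents; a directory is "shared" when more than one package name appears at the end of its line. A minimal sketch (the grep is what you would run on a real system; the sample line and the SUNWcsr package name are illustrative assumptions, not taken from an actual machine):

```shell
# Sketch: a contents-file line ends with the packages that own the
# pathname. The sample line below only illustrates the format.
line='/etc/cn/scripts d none 0755 root sys SUNWcsr Solaris_add-point'
# On a real Solaris system you would run instead:
#   grep '^/etc/cn/scripts ' /var/sadm/install/contents
set -- $line
shift 6                       # skip path, ftype, class, mode, owner, group
echo "owning packages: $*"
```

Only once no remaining installed package claims the directory will pkgrm remove it; there is no supported way to force earlier removal, and for good reason.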
I am using Windows Powershell (on windows 10). I am also using the latest version 2.3.0.5 of docker desktop. When I type "docker version" in powershell the command is not recognized. The error message reads "Der angegebenen Datei ist keine Anwendung zugeordnet." (English: No application is assigned to the specified file). When I instead include the file extension and type "docker.exe version" the command can be executed. The interesting thing is that "docker version" works in a cmd window, but for some reason not in powershell. The extension ".exe" is contained in the windows environment variable PATHEXT.
What could be the reason that it doesn't work in powershell?
PS: I had an old version of docker installed before. There everything worked fine. Then I updated to the newest version. After that I couldn't use my existing docker containers anymore. So I uninstalled the old version and installed version 2.3.0.5. Since then I have this issue.
tl;dr:
Run Get-Command -All docker | ForEach-Object Path
Among the file paths returned, remove those that do not end in *.exe (use Remove-Item).
The likeliest explanation is that one of the directories in your system's path ($env:PATH) that comes before the one in which docker.exe is located contains another file whose base name is docker:
Either: It is an extension-less file literally and fully named docker [this is what the problem turned out to be].
PowerShell unexpectedly tries to execute this extension-less file, because it considers it executable, despite - by definition - not having an extension designated as executable via the PATHEXT environment variable ($env:PATHEXT).[1]
This would explain cmd.exe's different behavior, because it sensibly never considers an extension-less file executable.
Presumably, the uninstallation of the old Docker version removed the original docker.exe, but left an extension-less docker file in the same directory behind (possibly a Unix shell script).
Or: It does have an extension (other than *.exe), which:
refers to a file that isn't directly executable and needs an interpreter - a separate executable - in order to be executed
and that extension is listed in the PATHEXT environment variable
and the association between the filename extension (e.g., .py) and the (information about the) associated interpreter is (now) missing, possibly as a result of having uninstalled the older Docker version.
[1] In fact, PowerShell unexpectedly considers any filename extension executable - see GitHub issue #12632.
However, for those extensions not listed in PATHEXT, execution via the path only works if you include the filename extension in a file-name-only call (e.g., executing file.txt opens a file by that name located in the first folder in the path that has such a file in the associated editor). With an extension-less file, there is obviously no extension to include, which is why confusion with an *.exe file of the same base name is possible (unless you invoke with .exe); if both such files reside in the same directory in the path, the *.exe file takes precedence, but if the extension-less file is in a different directory listed earlier in the path, it takes precedence.
I have a program that checks to see if the files in my directory are readable, writable, and executable.
I have it set up so it looks like:
if (-e $file){
print "exists";
}
if (-x $file){
print "executable";
}
and so on
but my issue is that when I run it, it shows that the text files are executable too. Plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system) the "executable" setting is just a flag that is set in the directory entry for a file. That flag can be set on or off for any file.
There are actually three of these permissions, which record whether you can read, write or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "properties" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
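A quick terminal demonstration of this point (the filename is arbitrary): flipping the executable bit on a plain text file changes the result of the -x test without touching the contents.

```shell
# The executable flag is independent of content: set and clear it on a
# one-line text file and watch the -x test change.
echo "Some text" > demo.txt
chmod +x demo.txt             # set the executable bit
test -x demo.txt && echo "demo.txt is executable"
chmod -x demo.txt             # clear it again
test -x demo.txt || echo "demo.txt is not executable"
rm demo.txt
```

This is exactly why Perl's -x reports true for your text files: it consults the permission bits, not the contents.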
The statement if (-x $file) does not check whether a file is an executable, but whether your user has execution privileges on it.
For checking whether a file is an executable or not, I'm afraid there isn't a magic method. You may try to use:
if (-T $file) for checking if the file has an ASCII or UTF-8 encoding.
if (-B $file) for checking if the file is binary.
If this is unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that every file can be executed. The question here is: will the execution of, e.g., test.txt throw an error on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, it will produce an error on STDERR like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch all your files and you may not be willing to do so. Especially if the Perl script is in the same directory that you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)
I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=x will skip the given number of remote directory components when creating local directories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt has these kinds of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of url's the way I do now, keeping my parameters.
b.- get all files at the same directory and being able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the -nc or --no-clobber option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script:
while read -r i; do
    wget <all your flags except the -i flag> -O /path/to/custom/directory/filename "$i"
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on the disk, and then take a decision to rename the temp file to the name that you want.
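As a sketch of that parsing step (the file and URL names are the question's; the substitution scheme - slashes in the URL path become underscores - is just one assumed way to get collision-free names):

```shell
# Derive a unique local filename from each URL so same-named files
# from different directories no longer collide.
while read -r url; do
    path=${url#*://}                          # website/directory1/picture-same-name.jpg
    name=$(printf '%s' "$path" | tr / _)      # website_directory1_picture-same-name.jpg
    echo "$url -> $name"                      # replace echo with your wget ... -O call
done < list-of-files-to-download.txt
```

The echo is a stand-in: once the names look right, substitute the wget invocation with -O pointing at the derived name.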
This offers much more control over your downloads.
This should work on my CentOS 6.6 but somehow the file name is not changed. What am I missing here?
rename -f 's/silly//' sillytest.zi
This should rename sillytest.zi to test.zi, but the name is not changed. Of course I could use the mv command, but I want to apply this to many files and patterns.
There are two different rename utilities commonly used on GNU/Linux systems.
util-linux version
On Red Hat-based systems (such as CentOS), rename is a compiled executable provided by the util-linux package. It’s a simple program with very simple usage (from the relevant man page):
rename from to file...
rename will rename the specified files by replacing the first occurrence of from in their name by to.
Newer versions also support a useful -v, --verbose option.
NB: If a file already exists whose name coincides with the new name of the file being renamed, then this rename command will silently (without warning) over-write the pre-existing file.
Example
Fix the extension of HTML files so that all .htm files have a four-letter .html suffix:
rename .htm .html *.htm
Example from question
To rename sillytest.zi to test.zi, replace silly with an empty string:
rename silly '' sillytest.zi
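If neither rename variant is available (or just to see what the util-linux semantics amount to), the first-occurrence substitution can be mimicked with plain POSIX shell parameter expansion. A minimal sketch using the question's filename:

```shell
# Mimic util-linux rename's "replace first occurrence of from with to"
# using parameter expansion (no rename binary required).
from=silly; to=''
touch sillytest.zi
for f in *"$from"*; do
    # prefix before the first match + replacement + suffix after it
    mv -- "$f" "${f%%"$from"*}$to${f#*"$from"}"   # sillytest.zi -> test.zi
done
ls test.zi
```

This only handles one pattern per run, of course; for anything richer, use the Perl rename described next.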
Perl version
On Debian-based systems, rename is a Perl script, which is much more capable, as you get the benefit of Perl's rich set of regular expressions.
Its usage is (from its man page):
rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]
rename renames the filenames supplied according to the rule specified as the first argument.
This rename command also includes a -v, --verbose option. Equally useful is its -n, --no-act which can be used as a dry-run to see which files would be renamed. Also, it won’t over-write pre-existing files unless the -f, --force option is used.
Example
Fix the extension of HTML files:
rename 's/\.htm$/.html/' *.htm
I'm working on an RPM package that deploys files to /opt and /etc.
In most cases it works perfectly, except that in one given environment, writing to /etc is not allowed.
So I used Relocations in order to deploy the /etc files to some other location:
Relocations : /opt /etc
By specifying the --relocate option, I can deploy the /etc files into another location:
rpm -ivh --relocate /etc=/my/path/to/etc mypackage.rpm
Now the issue is that in the postinstall script there are some hard-coded references to /etc that don't get replaced when the package is deployed:
echo `hostname --fqdn` > /etc/myapp/host.conf
I hope there is a way (a macro, keyword, ... ) to use instead of hard-coded paths in order to perform the substitutions during rpm execution.
If you have any information on this, I'd really appreciate some help.
Thanks in advance.
PS: Please note that this is NOT a duplicate of the previously asked (and answered) questions about relocating the root path, as we're dealing with several relocation paths and need to handle each of them separately in the rpm scriptlets.
Many thanks to Panu Matilainen from the RPM mailing list, who answered the question. I'll reproduce his mail verbatim in order to share the knowledge:
I assume you mean (the above is how rpm -qi shows it though):
Prefixes: /opt /etc
The prefixes are passed to scriptlets via $RPM_INSTALL_PREFIX<n>
environment variables, <n> is the index of supported prefixes starting
from zero. So in the above,
/opt is $RPM_INSTALL_PREFIX0
/etc is $RPM_INSTALL_PREFIX1
So the scriptlet example becomes:
echo `hostname --fqdn` > $RPM_INSTALL_PREFIX1/myapp/host.conf
Works like a charm, thank you very much Panu !
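Putting it together, the relevant parts of the spec file would look roughly like this (a sketch; the Prefixes line and scriptlet are taken from Panu's reply and the question, the surrounding structure is assumed):

```
Prefixes: /opt /etc

%post
# $RPM_INSTALL_PREFIX1 expands to /etc by default, or to the path given
# with --relocate /etc=/my/path/to/etc at install time
echo `hostname --fqdn` > $RPM_INSTALL_PREFIX1/myapp/host.conf
```

Any other hard-coded /opt paths in the scriptlets would likewise become $RPM_INSTALL_PREFIX0.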