How do I access specific sections of man pages?
Put the section number in front of the item you want to look up. For example, to access the sysctl function from the library functions section (section 3), you would write:
man 3 sysctl
and to access the sysctl tool from the system administration section (section 8), you would write:
man 8 sysctl
To add to what Jason said: if you're not sure what section something is in, you can also see all of the man pages for a given topic by typing
man -a topic
This is useful for topics such as printf, for which there is both a command and a C function that do similar things.
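For instance, printf typically has a page in both the commands section and the C library section, so (on a typical Linux system) you can compare them with:
man -a printf    # step through every printf page, one after another
man 1 printf     # just the shell command
man 3 printf     # just the C library function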
Use the -s flag, for example:
man -s 2 read
You might like to look at
man intro
to get an idea of what's where.
HTH.
cheers,
Rob
In the wget documentation it says:
-X list
But what does it actually mean when I call:
wget -X GET https://www.google.com
Can anybody explain please?
From the man page:
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude from download. Elements of list may contain wildcards.
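In other words, wget's -X is not curl's -X (which sets the HTTP request method); it only matters for recursive downloads, where it skips the listed directories. A minimal sketch, with a placeholder host and made-up directory names:
wget -r -np -X /ads,/archive https://example.com/
# mirror example.com recursively, but skip anything under /ads or /archive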
This is used to check whether the index (or indices) exists. For example:
curl -XHEAD -i 'http://localhost:9200/twitter'
The HTTP status code indicates if the index exists or not. A 404 means it does not exist, and 200 means it does.
What is the use of the -i option in the above example?
The -i here is a cURL option, so the answer is in cURL's documentation. The related -I/--head option is described as:
Different protocols provide different ways of getting detailed
information about specific files/documents. To get curl to show
detailed information about a single file, you should use -I/--head
option. It displays all available info on a single file for HTTP and
FTP. The HTTP information is a lot more extensive.
and the -i option itself is documented as:
-i, --include
(HTTP) Include the HTTP-header in the output. The HTTP-header includes
things like server-name, date of the document, HTTP-version and
more...
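To see the difference in practice (assuming, as in the question, an Elasticsearch node on localhost:9200):
curl -i 'http://localhost:9200/twitter'          # GET: response headers followed by the body
curl -I 'http://localhost:9200/twitter'          # HEAD: headers only, no body
curl -XHEAD -i 'http://localhost:9200/twitter'   # the question's form: explicit HEAD, headers shown because of -i
Without -i (or -I), the last form would show neither the status line nor the headers, and a HEAD response has no body to print; -i is what makes the 200 or 404 visible.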
I'm wondering whether, using Perl's eBay::API, there is a way to pass the title of a potential listing and get back a category recommendation.
For example, if I entered "Lapierre Zesty 914 2013" it would return something like
"Sporting Goods > Cycling > Bikes" or perhaps a set of possibilities?
I started looking at http://search.cpan.org/~tkeefer/eBay-API-0.25/, but there are so many modules, I hoped someone could point me at the right one...
I'm searching eBay.co.uk. If eBay::API doesn't do it, but something else (in Perl) does, please do say.
On http://search.cpan.org/~tkeefer/eBay-API-0.25/, find the "Other tools" link. There's a search function so you can find things in all of the distribution files. Looking for keywords such as "suggested" often leads you in the right direction. :)
Of course, two minutes after asking, I found it: GetSuggestedCategories does the trick.
I need to write a script that writes (appends) data to an internal wiki that isn't public (it needs a username and password, but it's served over unencrypted HTTP, not HTTPS). The script can be a shell script, a Perl script, or even a Java application (as a last resort). Any help would be appreciated. Let me know if any additional information is needed.
Right now I'm only able to read from the wiki, using Perl's LWP library and its getprint($url) function.
Thanks
If it's truly MediaWiki, then just use MediaWiki::API.
I'm attempting to use wget to recursively grab only the .jpg files from a particular website, with a view to creating an amusing screensaver for myself. Not such a lofty goal really.
The problem is that the pictures are hosted elsewhere (mfrost.typepad.com), not on the main domain of the website (www.cuteoverload.com).
I have tried using "-D" to specify the allowed domains, but sadly no cute jpgs have been forthcoming. How could I alter the line below to make this work?
wget -r -l2 -np -w1 -D www.cuteoverload.com,mfrost.typepad.com -A.jpg -R.html.php.gif www.cuteoverload.com/
Thanks.
Wget's man page[1] says this about -D:
Set domains to be followed. domain-list is a comma-separated list of domains. Note that it does not turn on -H.
This advisory about -H looks interesting:
Enable spanning across hosts when doing recursive retrieving.
So you need merely to add the -H flag to your invocation.
(Having done this, it looks like all the images are restricted to mfrost.typepad.com/cute_overload/images/2008/12/07 and mfrost.typepad.com/cute_overload/images/2008/12/08.)
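Putting it together, the invocation would look something like this: the original command with -H added, and the -R list written in the comma-separated form wget expects:
wget -r -l2 -np -w1 -H -D www.cuteoverload.com,mfrost.typepad.com -A .jpg -R .html,.php,.gif www.cuteoverload.com/
Here -H allows the recursion to leave www.cuteoverload.com, and -D then restricts that spanning to the two listed domains.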
--
[1] Although wget's primary reference manual is in info format.