I have created a new index using "file crawler" as the template. I can see that it has crawled the files from the local folder, but how do I use this search? Is there any feature that can search for and list one of my files?
You should probably have a look at this tutorial. It describes the main concepts available in OpenSearchServer. The chapter you will be interested in is search-page-rendering.
I apologize if a similar question has already been asked, but I haven't found one.
I would like to change the default project description file name (README.md) to a custom name (let's say XXX.md), and I wonder whether XXX.md could then serve as the initial readme file for the project (the typical situation: you open the project's Code page and see the content of XXX.md rendered at the bottom).
I would like to verify whether this is possible in general, but I am mainly interested in the GitHub and Bitbucket services. I briefly checked the project settings and cannot find such a customization there. Is it even possible?
GitHub, at least, doesn't provide the ability to do this. It is possible to use a different format (e.g., README.asciidoc or README.rst), but the file in the repository root must still be named README (plus the appropriate extension).
Note that you can include other text markup documents like this, and they'll be rendered when visited; they just won't appear below the file listing the way a README does.
I'm planning to try using DokuWiki to manage my large collection of notes, and one of its major attractions is its flat-file basis, which will allow me to edit via scripts etc. I have a question: suppose a page's material fits into multiple namespaces. If I create the file in one namespace and then create symlinks to it in the other namespace directories, would that work? Or would that screw up revisions etc.?
Yes, you can do that. But yes, this will mess with your revisions a bit:
when DokuWiki saves a page, it copies the old page's data to the attic
the name of the attic file is the same as that of the page that was edited, but with a timestamp appended
because a new attic file is created on every save, you can't make symlinks work in the attic
Imagine you have the following setup:
data/pages/original.txt
data/pages/copy.txt -> original.txt
You can now edit the pages original and copy in your wiki and they will both always be identical. However, old revisions will be split between the two pages, depending on which one you edited.
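For illustration, after one edit to each page the attic from the setup above might look like this (assuming the default gzip compression; the timestamps are made up):
data/attic/original.1384556891.txt.gz
data/attic/copy.1384557203.txt.gz
Both files hold revisions of what is logically the same content, but neither page's history is complete on its own.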
Instead of messing with things at the file level, consider:
The Include plugin, to share content between pages (e.g. {{page>commons:mypage}} embeds the content of another page; the page name here is just a placeholder).
Creating a 'commons' namespace for such shared pages, to stay DRY.
Namespace templates (plus an additional plugin).
Pulling content from the page side instead of pushing it to pages. This might be a good place to start; you can always include some PHP code or even write your own plugin.
I want to store my documentation under SVN source control.
In the DokuWiki settings there is:
Directory for saving data '.../apps/dokuwiki/data'
DokuWiki stores all data inside text files under the '.../apps/dokuwiki/data' folder. There is a lot of stuff there, including indexes, caches, etc. It seems that I only need the 'pages' folder.
How can I move the 'pages' folder into my SVN folders and configure DokuWiki to use the pages from there?
$conf['datadir'] can be set in conf/local.php to point the page directory somewhere independent of the rest of the directories under data. You will probably want to set $conf['mediadir'] for uploaded images and files as well, and maybe $conf['metadir'] for page metadata.
Here's an example of what I set mine to:
$conf['datadir'] = './my-data/pages/';
$conf['mediadir'] = './my-data/media/';
$conf['metadir'] = './my-data/meta/';
N.B. Be sure to use 'datadir' (not 'pagedir') as noted in the comments to the earlier answer.
You may also want to configure your attic:
$conf['olddir'] = './my-data/attic/';
This makes management under SVN more complicated, as you have to add the new attic files all the time, but it preserves change history across developers. Whether it's worth it depends on your installation; if you regularly clean out your attic, you won't want to do this.
Can you point me to an idea of how to get all the HTML files in a subfolder of a website, including all the folders inside it?
For example:
www.K.com/goo
I want all the HTML files that are at: www.K.com/goo/1.html, ..., n.html
Also, if there are subfolders, I want to get them too: www.K.com/goo/foo/1.html, ..., n.html
Assuming you don't have access to the server's filesystem, then unless each directory has an index of the files it contains, you can't be guaranteed to achieve this.
The normal way would be to use a web crawler, and hope that all the files you want are linked to from pages you find.
Look at lwp-mirror and follow its lead.
I would suggest using the wget program rather than Perl to download the website; Perl is not that well suited to this problem. For example, something like wget -r -np -A html http://www.K.com/goo/ recursively fetches the HTML files linked under /goo.
There are also a number of useful modules on CPAN with names like "Spider" or "Crawler". But ishnid is right: they will only find files that are linked from somewhere on the site; they won't find every file that's on the filesystem.
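For what it's worth, here is a minimal sketch of that crawling approach using WWW::Mechanize from CPAN (the start URL is taken from the question; printing matching URLs rather than saving them is an assumption):
use strict;
use warnings;
use WWW::Mechanize;

my $start = 'http://www.K.com/goo/';
my $mech  = WWW::Mechanize->new( autocheck => 0 );  # don't die on broken links
my @queue = ($start);
my %seen;

while ( my $url = shift @queue ) {
    next if $seen{$url}++;                  # skip URLs we've already visited
    next unless $url =~ m{^\Q$start\E};     # stay inside /goo
    $mech->get($url);
    next unless $mech->success;
    print "$url\n" if $url =~ /\.html$/;    # found an HTML file
    next unless $mech->is_html;             # only follow links in HTML pages
    push @queue, map { $_->url_abs->as_string } $mech->links;
}
As noted above, this can only discover HTML files that are actually linked from some page the crawler visits.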
You can also use curl to fetch the files from a website folder. Look at its man page and go to the -o/--output section, which gives you a good idea of how this works. Since your files are numbered, curl's URL ranges may help too, e.g. something like curl -o '#1.html' 'http://www.K.com/goo/[1-100].html'. I have used this a couple of times.
Read perldoc File::Find, then use File::Find.
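For instance, a minimal sketch that lists every .html file under a directory; the document-root path here is an assumption, and this approach requires filesystem access to the server, as the first answer points out:
use strict;
use warnings;
use File::Find;

# find() invokes the callback for every entry below the given directory;
# inside it, $_ is the bare file name and $File::Find::name the full path.
find(
    sub { print "$File::Find::name\n" if -f && /\.html$/ },
    '/var/www/K.com/goo',
);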
The view file macro allows embedding documents (.ppt, .pdf, etc.) in a Confluence wiki page. The limitation is that the documents must be attachments.
So, the question: is there a way to dynamically load a file located in an SCM repository?
P.S. Current SCM: Perforce.
UPDATE: As far as I can see, there is no official Perforce plugin.
You may of course include a link to the file, if Perforce provides a way to link to items. We use that a lot to reference content stored in Subversion, and then document its status, usage, and so on in Confluence. The user has to click the link to get the file, but I think that's necessary anyway, because your authorization rules are not known to Confluence.