DokuWiki cannot support filenames which contain uppercase letters?

DokuWiki is an excellent wiki tool, but it fails to support filenames which contain uppercase letters when uploading attachments.
Any ideas how to solve this?

DokuWiki aims to be as portable as possible. This means it allows you to move your existing wiki data between operating systems and file systems without problems. To achieve this (and to cope with different file systems being case sensitive or not) it enforces some naming standards (all file names being lower case is one of them).
When you upload a file through DokuWiki's media manager it will automatically be renamed to comply with the naming standards used within DokuWiki. If you upload via the file system, you have to make sure your files comply manually.
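For example, an attachment uploaded as My Photo.JPG would typically end up stored under a cleaned-up media ID such as my_photo.jpg (lowercased, with the space turned into an underscore); the exact cleanup rules depend on your DokuWiki version and configuration.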

Related

How to put a class-diagram under version control?

I want to upload a class-diagram to a public repository on GitHub.
Is there any tool which is considered to be a convention for this purpose?
Currently, I am using Google Docs, from which I can export a PDF.
Someone has suggested that I use https://www.draw.io/, from which I can export XML (which would be a lot more suitable for version control, since it is pure text), but I don't know whether or not this tool is "well accepted" across the community.
All version control systems work with text files. PDF is not a text file; forget it. It is the same as putting exe files under version control. VCSs work with source files; don't forget this.
All diagram editors have inner representations of diagrams in some sort of text file. Eclipse UML editors use XML. So, version control systems can easily take these files and work with them.
The problem comes when you have conflicts. You will have to resolve them reading and understanding the inner language of the diagram representation. It could be very difficult.
So, it is possible, but try to minimize the conflicts.

Manage poedit synchronization with sources for multiple languages, command-line

I am using poedit to manage gettext translations in a php project. There are many people contributing to the project and each time some new language strings are added, I need to run poedit and synchronize with source for each language available, so that translators can then translate the language files. My questions are:
a) Do I have to do this every time for each language, or is there an easier way?
b) Is there a way to do this from command-line (so I can add it to a cron job for example)?
Thanks in advance
If this is a pure gettext project, i.e. not relying on the custom extraction provided by poedit, then:
Yes, you do need to do this every time for each language. But you could choose to use xgettext and create a POT file. Give the POT file to translators, who can then do the updates themselves. I think poedit can update from a template within the UI.
You could do this yourself on the command line, using xgettext to update your POT file and then msgmerge to update each individual language file. But then you need to communicate with your translators, as they might have been translating the file that you have just updated for them.
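As a rough sketch of that workflow (the locale layout, domain name and language list below are assumptions about your project, not taken from it):

    # Extract translatable strings from the PHP sources into a template.
    find . -name '*.php' | xargs xgettext --language=PHP --from-code=UTF-8 \
        --output=locale/messages.pot

    # Merge the updated template into each language's PO file.
    for lang in de fr es; do
        msgmerge --update --backup=off "locale/$lang/LC_MESSAGES/messages.po" locale/messages.pot
    done

Wrapped in a small script, this is also what you would call from a cron job.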
From a translator's perspective, you should try to limit your string churn and create a concept of a string freeze. Having to retranslate things is quite time consuming for translators and, as you've discovered, for yourself.

Are URLs to doxygen pages permanent?

Hey everyone, we just added a nightly action to process the entire source tree with doxygen and place the output onto a development web server.
We also already have a sharepoint structure which holds design documents for various modules/projects. Currently, the level at which we are keeping this documentation is relatively high. We discuss structures of modules and talk about the major classes, but never go down to the individual method level. I wanted to bridge that gap by having hyperlinks in the SDS word documents that would point to doxygen output.
I noticed the links look like this:
http://example.com/docs/ProjectName/d4/d98/class_c_reader.html
http://example.com/docs/ProjectName/d4/d16/class_c_stream.html
The part that sketches me out a bit is the "d4", "d98" and "d16" strings in the path. If I copy these links and create the hyperlinks, does anyone know if these URLs are guaranteed to be preserved in the future? As I said, the entire doxygen output gets regenerated nightly.
You can disable the d4/d98 subdirectories by disabling CREATE_SUBDIRS in the doxygen configuration.
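That is a single setting in the Doxyfile; a minimal example of the relevant line:

    # Keep all generated HTML files in one flat directory instead of d4/d98/... subdirectories.
    CREATE_SUBDIRS = NO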
Whether the names of the HTML files will stay the same I do not know for sure, but from what I have seen when using doxygen it seems so. If you want to know for sure you can always look at the doxygen source.
These links will probably not stay permanent.
Furthermore, Doxygen has an XML representation of the generated documentation, but even this interface and the corresponding schema have been changed with new releases of doxygen. This is quite frustrating, as we had used the XML representation for a similar application under the assumption that the structures would be kept identical with every new release.

Do you version "derived" files?

Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason for this is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus avoiding the need to version these files.
One scenario I can imagine where this could be useful is when 'tagging' specific releases of a product, for use in a production environment (or any non-development environment) where the tools required for generating the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for downloading by users of your products.
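As a rough sketch (the script name, file names and upload destination are made up for illustration), such a step could look like:

    #!/bin/sh
    # release.sh - rebuild the derived files from source, then package and upload them.
    set -e
    make all                                   # or whatever regenerates your output
    VERSION=$(cat VERSION)
    tar czf myproduct-$VERSION.tar.gz build/
    scp myproduct-$VERSION.tar.gz user@downloads.example.com:/var/www/releases/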
I am using Tortoise SVN for small system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst it doesn't make a lot of sense to have these under source control in theory, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also - in case of disaster - the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that creates a tarball including the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more linear audit trail than branching-and-tagging source control.

What is the best/a very good meta-data reader library?

Right now, I'm particularly interested in reading the data from MP3 files (ID3 tags?), but the more it can do (eg EXIF from images?) the better without compromising the ID3 tag reading abilities.
I'm interested in making a script that goes through my media (right now, my music files) and makes sure the file name and directory path correspond to the file's metadata and then create a log of mismatched files so I can check to see which is accurate and make the proper changes. I'm thinking Ruby or Python (see a related question specifically for Python) would be best for this, but I'm open to using any language really (and would actually probably prefer an application language like C, C++, Java, C# in case this project goes off).
There is a great post on using PowerShell and TagLibSharp on Joel "Jaykul" Bennett's site. You could use TagLibSharp to read the metadata with any .NET based language, but PowerShell is quite appropriate for what you are trying to do.
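As a rough sketch of that approach (the DLL location and the MP3 path below are placeholders):

    # Load TagLibSharp and read a few ID3 fields from one file.
    [Reflection.Assembly]::LoadFrom("C:\libs\taglib-sharp.dll") | Out-Null
    $media = [TagLib.File]::Create("C:\Music\example.mp3")
    $media.Tag.Title
    $media.Tag.FirstPerformer   # first artist entry
    $media.Tag.Album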
Use exiftool (it supports ID3 too). It is written in Perl, but can also be used from the command line, and it has compiled Windows and Mac versions.
It is light-years ahead of any other metadata tool, supporting almost all known audio, video and image files; it supports writing (not just reading), and knows about all the custom/extended tags used by software (such as Photoshop) and hardware (many camera manufacturers).
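For example (the file and directory names are just placeholders), reading a few ID3 fields from the command line looks like this:

    # Print selected tags for one file.
    exiftool -Title -Artist -Album song.mp3

    # Recurse through a whole music directory.
    exiftool -r -Title -Artist -Album ~/Music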
@Thomas Owens: PowerShell is now part of the Common Engineering Criteria (as of Microsoft's 2009 Product Line) and, starting with Server 2008, is included as a feature. It stands as much of a chance of being installed as Python or Ruby. You also mentioned that you were willing to go to C#, which could use TagLibSharp. Or you could use IronPython...
@Thomas Owens: TagLibSharp is a nice library to use. I always lean towards PowerShell first, one to promote the language, and two because it is spreading fast in the Microsoft domain. I have nothing against using other languages, I just lean towards what I know and like. :) Good luck with your project.
Further to Anon's answer - exiftool is very powerful and supports a huge range of file types, not just images, but video, audio and numerous document formats.
A Ruby interface for exiftool is available in the form of the mini_exiftool gem
see http://miniexiftool.rubyforge.org/