I have noticed that TYPO3 is not processing images unless they need resizing.
This means I'm getting unoptimised jpg files that slow down page load.
Is there a way to tell it to process the image regardless of dimensions?
<f:media file="{file}" maxWidth="{dimensions.width}" />
If there is no proper way to do this, is there perhaps a filter I could apply at such a small level that it would not visually change anything but would force it to create a processed file?
A possible solution would be to optimize the original files:
Once, if they are only the images of your sitepackage/theme.
Periodically via cronjob, if the images are uploaded by editors (see the example below).
Have a look at jpegoptim, which can be used for bulk optimizing:
find . -iname "*.jp*g" -type f -print0 | xargs -0 jpegoptim -o --strip-all --max=90
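For the cronjob variant, a minimal sketch of a crontab entry (the fileadmin path and the nightly schedule are assumptions, so adjust both to your installation; jpegoptim rewrites the files in place, so test on a copy first):

# run nightly at 03:00 over the editor uploads
0 3 * * * find /var/www/html/fileadmin -iname "*.jp*g" -type f -print0 | xargs -0 jpegoptim -o --strip-all --max=90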
Related
Is there a way to batch edit multiple .srt files? We have a project where recent edits to the videos offset the .srt files by 5 seconds. I know how to timeshift a single .srt file, but I'm wondering if there is a way to timeshift 1000s of .srt files by 5 seconds.
Most command-line tools I'm aware of can do it file by file, but I haven't seen one work on whole folders.
This is an interesting challenge. You'd almost certainly have to write some short script to do this. Command line tools like sed and awk are great for text processing tasks like this, but the challenge I think you'll face is the timecode. It's not as simple as just adding 5 to the seconds field of each timecode because you might tip over the edge of a minute (i.e. 00:00:59.000 + 5 = 00:01:04.000). You'll have to write some custom code to handle this part of the problem as far as I know.
The rest is pretty straightforward, you just need a command like find . -name "*.srt" | xargs the-custom-script-you-have-to-write.sh
Sorry it's not a more satisfying answer. I don't know of any existing utilities that do this.
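To make that a bit more concrete, here is a minimal sketch of what such a custom script could look like, using awk (the script name, the fixed 5-second offset, in-place rewriting, and Unix line endings are all assumptions, so try it on copies first):

#!/bin/sh
# shift-srt.sh (hypothetical name): add 5 seconds to every timecode line of one .srt file.
# Usage: ./shift-srt.sh subtitles.srt
awk -F' --> ' '
  function shift(t,    a, total, h, m, s, ms) {
    # t looks like HH:MM:SS,mmm -- convert to milliseconds, add 5 s, convert back
    split(t, a, /[:,]/)
    total = (a[1] * 3600 + a[2] * 60 + a[3] + 5) * 1000 + a[4]
    h = int(total / 3600000); total %= 3600000
    m = int(total / 60000);   total %= 60000
    s = int(total / 1000);    ms = total % 1000
    return sprintf("%02d:%02d:%02d,%03d", h, m, s, ms)
  }
  /-->/ { print shift($1) " --> " shift($2); next }
  { print }
' "$1" > "$1.tmp" && mv "$1.tmp" "$1"

Applied to a whole tree it would then be something like find . -name "*.srt" -exec ./shift-srt.sh {} \; (using -exec rather than xargs here, since the sketch only handles one file per invocation). Converting to milliseconds and back is what handles the minute roll-over mentioned above.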
How can you use local assets in a Dart documentation comment? This works fine when using a web-based URL, like so:
/// ![A row of icons representing a pink heart, a green musical note, and a blue umbrella](https://flutter.github.io/assets-for-api-docs/assets/widgets/icon.png)
What I would like to do is reference some image assets I have in my assets folder and display that. Something like this:
///![](/assets/some/local/path.png)
///![](/assets/some/other/path.svg)
But I cannot get any relative path to work. Is this possible at all?
It's not entirely satisfying, but I just made a shell script to copy images that are part of the documentation into the right place in the generated HTML tree, viz:
#!/bin/sh -x
cd `dirname $0`
dartdoc
cd lib
for f in `find . -name '*.comments' -print` ; do
    dir=`basename $f .comments`
    cp -r $f ../doc/api/$dir
done
cd ..
ObRant: It disappoints me that documentation generation tools generally don't support this (at least AFAIK). Visual communication can greatly improve documentation!
For example, consider this bit of Java. The UML is a big help in understanding the structure. http://spritely.jovial.com/javadocs/index.html?edu/calpoly/spritely/package-summary.html
I process PDF files in an existing Perl framework. Some of the incoming files have very large page sizes. What I would like to do:
Check if the input file's page size is larger than a specific format f (A4/Letter)
If it is larger than f: scale it down to f.
This is the corresponding command in Unix using GhostScript:
gs -sPAPERSIZE=a4 -dFIXEDMEDIA -dPSFitPage -o <outputFile> -sDEVICE=pdfwrite <inputFile>
Is there a way to do this within Perl, i.e. without having to call external tools?
I checked the modules PDF and CAM::PDF, but the documentation does not really cover my issue and I couldn't find a straightforward solution.
I am trying to copy files from a directory that is in constant use by a security cam program. To archive these .jpg files to another HD, I first need to copy them. The problem is, the directory is being filled as the copying proceeds, at a rate of about 10 .jpgs per second. I have the option of stopping the program, doing the copy, and then starting it again, which is not what I want to do for many reasons. Or I could do the find/mtime approach. I have tried the following:
find /var/cache/zm/events/* -mmin +5 -exec cp -r {} /media/events_cache/ \;
Which under normal circumstances would work. But it seems the directories are also changing their timestamps and branching off in different directions, so it never comes out logically, and for some reason each directory is very deep, like /var/cache/zm/events/../../../../../../../001.jpg x 3000. All I want to do is copy the files and directories via cron with a simple command line if possible. With the directories constantly changing, is there a way to make this copy without stopping the program?
Any insight would be appreciated.
rsync should be a better option in this case, but you will need to try it out. Try setting it up at off-peak hours when the traffic is not that high.
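A minimal sketch of what that could look like, reusing the paths from your find command (--ignore-existing is an assumption here, to avoid re-copying files that have already been archived):

rsync -a --ignore-existing /var/cache/zm/events/ /media/events_cache/

Run from cron, each pass only transfers files that are not yet present on the destination, so the constantly changing source directories are less of a problem than with cp.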
Another option would be setting up the directory on a volume which uses, say, mirroring or RAID 5; this way you do not have to worry about losing data (if that indeed is your concern).
I would like to view my CSV files in a column-aligned format from the command line, with something like less, but my CSV files are sometimes gigabytes big, and I'm using a little computer (Netbook, 1GB RAM, 8GB HD, 1GHz processor), so I don't want to waste a lot of memory or processing power viewing the file.
I mention that I'd like to use something like less because I would like to be able to navigate around within the file.
cat FILE | column -s, -t | less is one thought, but cat is still going to try to print the whole file and I'm not sure how much buffering the pipes will use (if any) or what sort of caching less employs.
This question is similar to this other question, but I'm specifically interested in viewing large files using minimal resources preferably already on the machine. I don't presently use VI or EMACS, and think they'd both be overkill here. VI, for instance, would be a 27MB install for a utility acting merely as a viewer.
First of all, less can open oversized files. Second, both vim (which I use with the Largefile plugin and with files over 8 GB) and emacs can do it.
But... most of the time, viewing a big file in an 80x40 (or a bit bigger) terminal is useless... so you should filter it with something like (f)grep or process it with awk. If you want only the start or end, then there are head and tail.
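For example, to get the column-aligned view for just a slice of the file without feeding the whole thing to column (which buffers its entire input to work out the column widths), something like this stays cheap (the line count is arbitrary):

head -n 1000 FILE | column -s, -t | less -S

less -S chops long lines instead of wrapping them, which keeps wide CSV rows readable; you can scroll sideways with the arrow keys.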
HTH
Check the tail / head commands.
Or even better, download the VIM source and compile it. That should be easy enough. Version 5.8 source is 1 MB before decompressing (4 MB after). Enjoy.