From what I understand, jpegtran is included with libjpeg-turbo and is used when saving an Image with optimize=True. Jpegoptim uses the same algorithm as jpegtran, but requires the image to be temporarily saved to disk before it can be optimized, and jpegoptim can additionally compress lossily.
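For reference, here is a minimal Pillow sketch of the optimize=True save I mean (paths are placeholders):

    from PIL import Image  # Pillow

    # Re-save a JPEG; optimize=True triggers an extra pass that picks
    # optimal Huffman tables (the lossless, jpegtran-style step).
    img = Image.open("input.jpg")
    img.save("output.jpg", optimize=True, quality=85)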
Looking at the mozjpeg repo on GitHub, there are a lot of references to libjpeg-turbo, and it has way more stars, so my question is: how are they related? Is mozjpeg a fork of libjpeg-turbo that does everything and more? That is, is it possible to disable certain features in mozjpeg and end up with performance and results identical to libjpeg-turbo's?
Yes, mozjpeg is a fork of libjpeg-turbo; the mozjpeg v1.0 release notes confirm it.
More precisely, it combines three techniques (progressive JPEG encoding, jpgcrush, and trellis quantization) to reduce the size of JPEG images. Progressive JPEG is supported in libjpeg-turbo, but jpgcrush and trellis quantization are not.
mozjpeg's implementation of the libjpeg API includes an extensibility framework that allows new features to be added without modifying the transparent libjpeg compress/decompress structures.
The JPEG files mozjpeg generates require much more time to compress than to decompress. When the default settings are used, mozjpeg is considerably slower than libjpeg-turbo, or even libjpeg, at compressing images. Thus, it is not generally suitable for real-time compression. It is best used as part of a web encoding workflow.
You can find more information here.
I am reading about fuzzing and have some basic questions. I searched but couldn't find a good explanation.
Why are image files popular and common targets for fuzzing? What is the benefit of using image files?
Why are PNG files popular and common for fuzzing?
Why is libpng popular and common for fuzzing?
Is fuzzing PNG images with libpng a good starting point for beginners? Why?
If someone can answer, it will be very helpful for me.
Thank you in advance.
You don't fuzz image files themselves; you fuzz the software that parses them. Developers typically don't write their own code to parse images, but use third-party libraries like libpng. As a developer you don't need to fuzz third-party libraries, only the code of your own project. As a security engineer you can fuzz them.
It is easy to set up fuzzing for such an open-source library: you can build it statically instrumented, create a small application that calls into it, and fuzz that with an easy-to-set-up fuzzer like AFL. This, together with the fact that such libraries are widely used (so errors in them can have a big impact on a lot of applications), makes them a good target for fuzzing.
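That workflow is C-oriented (AFL plus an instrumented build of the library). Just to show the shape of such a harness, here is a minimal Python sketch using Google's atheris fuzzer against Pillow's image parser; a C harness around libpng has the same shape: take bytes, hand them to the parser, and let the fuzzer watch for crashes.

    import io
    import sys

    import atheris

    with atheris.instrument_imports():
        from PIL import Image, UnidentifiedImageError

    def test_one_input(data: bytes) -> None:
        # Hand the mutated bytes to the parser; anything other than a
        # clean parse error (crash, hang, uncaught exception) is a finding.
        try:
            img = Image.open(io.BytesIO(data))
            img.load()  # force actual decoding, not just header sniffing
        except (UnidentifiedImageError, OSError, ValueError):
            pass  # expected rejections of malformed input

    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()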
But image files are not the only files that are widely used and have popular libraries to handle them. Most fuzzers are unaware of the input structure the tested binary expects. They mostly use mutation techniques at the bit/byte level: they change the values of some bits/bytes of the input, feed it to the tested application, and watch its behaviour. When the input is highly structured, a fuzzer fails to test deep into the code. For example, to test a browser by feeding it HTML files, a fuzzer would have to create inputs with correct lexical and syntactic structure. The code for lexical/syntax handling is typically autogenerated from a language grammar, and by flipping bits/bytes in HTML you most likely produce bad keywords that this autogenerated code rejects, so you mostly test that code and never get deeper. Image files are typically not highly structured and are easier to fuzz deeply, so they can be fuzzed with better coverage.
It is also faster to fuzz a small input than a bigger one: fewer bits to change. And it's easier to create a small image file (just take a small image as a seed) than, for example, a small HTML file.
I don't know whether PNG files are more popular for fuzzing than other binary media files, but their structure can include multiple headers/chunks of different types, which results in more distinct handling paths in the code and thus makes errors more likely.
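To illustrate that chunked structure, here is a small Python sketch that walks a PNG file's chunks (the layout is fixed by the PNG spec: an 8-byte signature, then length/type/data/CRC records; the path is a placeholder). Each chunk type (IHDR, PLTE, IDAT, tEXt, ...) takes a different code path in libpng.

    import struct

    def list_png_chunks(path):
        # PNG layout: 8-byte signature, then chunks of
        # [4-byte big-endian length][4-byte type][data][4-byte CRC].
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break  # end of file
                length, ctype = struct.unpack(">I4s", header)
                f.seek(length + 4, 1)  # skip chunk data and CRC
                yield ctype.decode("ascii"), length

    for ctype, length in list_png_chunks("example.png"):  # placeholder path
        print(ctype, length)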
As I said, it's open-source, widely used, easy to set up, and can be fuzzed fast: it's much faster to run a small test application than, for example, a browser.
I'm not sure there can be a "best" criterion, but it's easy and therefore good for beginners.
For my non-commercial, low-traffic web site, I successfully use Leaflet with standard raster tile layers from well-known sources.
I'd like to add additional layers containing very localized high-resolution maps. I've succeeded in making a usable raster tile-set from such a map, hosting the tiles on my own server, and adding that as an additional layer. But this creates a huge file count. My cheap shared-hosting account promises unlimited storage but limits file (actually, inode) counts. If I add one more such tile-set, I risk getting thrown off my server.
Clearly I can look for a hosting account with higher limits, and I'm exploring Cloud alternatives, too. (Comments welcome!)
Any other ideas? Are there free or very low-cost alternatives for non-commercial ventures to use for low-traffic tile storage?
Or: as I look at the localized, high-resolution maps, I see I could fairly easily trace them to create vector artwork without much loss of data, and with some gains in clarity. I use Adobe Illustrator. Is there a reasonably painless way to get from an .ai file (or some similar vector format) to a Leaflet layer, with a substantially lower file count than the raster alternative?
Apologies if I've misused terminology (please correct me) or if I've cluelessly missed some incredibly obvious way of solving this problem.
TIA
This sounds like a good use case for the Leaflet.TileLayer.MBTiles plugin (demo):
A LeafletJS plugin to load tilesets in .mbtiles format.
The idea is to write your tiles into a single .mbtiles file (actually an SQLite database), so that you need to host only that single file on your server instead of your thousands of individual tiles.
The drawback is that visitors now need to download the entire file before the map can actually display your tiles. But then navigation is extremely smooth, since tiles no longer need to be fetched from the network, but are all locally available to the browser.
As for generating the .mbtiles file, there are many implementations that can do the job.
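To make that concrete, the format is simple enough to write by hand; here is a minimal Python sketch following the MBTiles spec (the file name and metadata values are made up, and note that the spec stores rows in TMS order, i.e. flipped relative to XYZ tile URLs):

    import sqlite3

    con = sqlite3.connect("map.mbtiles")  # made-up filename
    con.executescript("""
        CREATE TABLE IF NOT EXISTS metadata (name TEXT, value TEXT);
        CREATE TABLE IF NOT EXISTS tiles (
            zoom_level INTEGER, tile_column INTEGER,
            tile_row INTEGER, tile_data BLOB);
    """)
    con.execute("INSERT INTO metadata VALUES ('name', 'my-local-map')")
    con.execute("INSERT INTO metadata VALUES ('format', 'png')")

    def add_tile(z, x, y, png_bytes):
        tms_row = (2 ** z) - 1 - y  # MBTiles stores rows in TMS (flipped) order
        con.execute("INSERT INTO tiles VALUES (?, ?, ?, ?)",
                    (z, x, tms_row, png_bytes))

    # e.g. add_tile(14, 8185, 5448, open("tiles/14/8185/5448.png", "rb").read())
    con.commit()
    con.close()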
I am building a Unity application in which I need to use an H.264-encoded stream. The data will be read over the network and shown on the screen (feed).
I have worked with FFmpeg, but closing the context is causing a problem, so I was thinking of writing my own h264.dll decoder.
I cannot find anything on where to start; everyone has SoC solutions for hardware decoders, and there are encoders available. Is there any code base or reference on where to start?
I've used FFmpeg and it's good, but it's becoming big to ship since all those DLLs are a bit bulky, so I want one DLL that is the decoder, plus whatever dependencies it requires.
If the GPL is acceptable for your project, you can compile ffmpeg (libav*) yourself, eliminating all the unnecessary dependencies (./configure --help will show you what options are available). Or you can license a decoder from a company like CoreCodec. Creating your own decoder is a no-go: the spec is about 1000 pages long and assumes you are familiar with advanced maths.
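For example, something along the lines of ./configure --disable-everything --disable-programs --disable-doc --enable-decoder=h264 --enable-parser=h264 --enable-demuxer=h264 strips the build down to just an H.264 decoder, leaving out the encoders, muxers, filters, and command-line tools you would otherwise be shipping (flag names vary between ffmpeg versions, so verify against ./configure --help for your tree).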
I have developed a Windows application that captures video from an external device using DirectShow. The image resolution is 640x480, and videos saved without compression are huge (approx. 27 MB per second, consistent with uncompressed 24-bit RGB at 30 fps).
My goal is to reduce this size as much as possible, so I am looking for an encoder that will let me compress the video in real time. It could be H.264, MPEG-2, or anything else. It must allow me to save the video to disk, and ideally I could also stream it in real time over the network (Wi-Fi, so the rate should be around 1 MB per second or less). Significant quality loss would be unacceptable.
I have found that getting an appropriate DirectShow filter for this task is very difficult. It can be assumed that the client machine is reasonably modern (fast 2-core CPU) and can utilize CUDA/OpenCL. There are a few apps that can encode video using CUDA and offer good performance, but I have not found an appropriate DirectShow filter or an API that could be used to develop one. NVIDIA's nvcuvenc.dll seems to have a private API, so I am unable to use it directly. The CPU-based encoders I have found are too slow for my requirements, but maybe I have missed some.
Could anybody recommend a solution, i.e. an encoder (paid or free, usable in a closed-source app) that achieves good performance, whether it uses the CPU, CUDA, OpenCL, or DirectCompute? Or maybe I should use an external hardware video encoder?
Best regards,
madbadger
Since you're using DirectShow, by far the easiest thing to do would be to use WMV9 inside an ASF container. This is easier because it's available on almost all Windows machines (very few install-time dependencies), decently fast (you should have no issues using it on a reasonably modern machine), and the quality is reasonable. But considering your limit is 8 Mbit/s (1 MB/s), quality isn't an issue for you. 2 Mbit/s, VGA-resolution WMV9 should look quite good.
It's not nearly as good as a decent implementation of H264, but from an implementation standpoint, you're going to save yourself a lot of time by going this route.
See this:
http://msdn.microsoft.com/en-us/library/dd375008%28v=VS.85%29.aspx
Which filters have you tried?
If you're only dealing with 640x480, then any reasonable-quality commercial software-based encoder should be fine as long as you choose a realistic bitrate. Hardware acceleration via CUDA or OpenCL shouldn't be required. H.264 takes a bit more horsepower and would benefit from more CPU cores, but MPEG-2 or any of the H.263-era codecs (Xvid, WMV9, DivX, etc.) should have no problems even on a modest CPU. Streaming it over the network at the same time takes a little more effort, but should still be possible.
It's not DirectShow-based, but VLC Media Player can do all this. It's based on the FFmpeg open-source project. Some versions of it are LGPL-licensed, so the library could be incorporated into your project without many restrictions.
If you just want a set of DirectShow filters that will handle all this for you, I've had good results with MainConcept's products before. They're at the expensive end of the spectrum though.
You don't specify which filters you've tried or what "significant" quality loss means, so about the best we can do is suggest some encoders to try and see whether they meet your requirements.
Two good ones are the Theora and WebM video encoder filters (you can get them from a single installer at xiph.org). They're both high-quality encoders that can be tweaked to balance performance against quality. WebM can use multiple processors when encoding, which might help in your situation. Both are also used with HTML5 video, so that might be an extra plus for you.
Forget about WMV encoding for realtime streaming. WMV works well for realtime low-quality streams, but it doesn't do high-quality encoding in realtime.
I suggest that you take a look at MainConcept's SDK. They do a series of DirectShow filters for encoding H.264. I've implemented realtime streaming and muxing of streams encoded in H.264 using MainConcept's codec and DirectShow filters, and it's great.
Hope this helps
I am using Windows Media Encoder for real-time encoding, and it works well even at 720x576 resolution. One example of its usage is in VideoPhill Recorder.
It is written in pure .NET, with DirectShow.NET for capturing and WindowsMedia.NET for encoding.
Using those two, I am able to achieve real-time encoding with 24/7 stability.
And both libraries are free to use on Windows, so you won't have to pay any licenses except for the OS.
ffdshow tryouts leverage ffmpeg's x264 support, which is said to be pretty fast (I think so, anyway). libjpeg-turbo might also help, or choosing some other codec made for high throughput, like CamStudio's.
Update: ffmpeg can now take DirectShow input: http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=3&t=27
Have you seen this yet?
http://www.streamcoders.com/products/rtpavsource.html
http://www.streamcoders.com/products/rtpavrender.html
If you can stay at or below 1280x1024, Microsoft's MPEG-2 encoder (included in Vista and up) is quite good.
I haven't gotten it to work for 1080p content at all, though. I suspect the encoder just aborts on that. Shame.
Here is one option: http://www.codeproject.com/Articles/421869/H-264-CUDA-Encoder-DirectShow-Filter-in-Csharp
It uses about 10% of my CPU (P4 3 GHz) to encode SD video to H.264 in GraphEdit.
See the CaptureDS C# sample that comes with AVBlocks. It shows how to build a video recorder with AVBlocks and DirectShow: DirectShow is used for video capture and AVBlocks for video encoding.
I don't just mean publishing, but pretty much everything between when the pure coding is finished and when the first version is released. For example: how do games make their save files hidden/unhackable, how do they include their resources within the game as opposed to having a resource file containing all of the sprites, and how do they end up with special file extensions like .rect and .screen_mode, and so on?
So does anyone know any good books, articles, websites, etc. that explain the process between completing the pure code for a game and the release of it?
I don't think developers make much of an effort to ensure saves are hidden or unhackable. PC games usually just save out to a folder, one file per save, and any obfuscation is likely the result of using a binary file format (which requires some level of effort to reverse-engineer) or plaintext values that aren't very meaningful out of context, not deliberate attempts to circumvent hacking. There are probably a ton of PC games that have shipped with very easily hackable text or XML save files, but I've never been a save hacker, so I don't have any specific examples. On consoles, the save files go to a memory card or the console's hard drive, which makes them inherently inconvenient to access, but beyond that I don't think console developers make much of an effort to encrypt or otherwise obfuscate save data. That energy is more likely directed toward securing the game against cheating, if it's an online game, or just making other systems work better.
Special file extensions come from just using your own extensions and/or defining your own file formats. You can use any extension for any file, so there are tons of "special" file formats that are just text files with a different extension; I've done this plenty of times myself. In other cases, when developers have defined their own binary file format, they also have their own parsers to process those files at runtime.
I don't know what platforms you have in mind, but for PC and console games, resources are not embedded in the executable. You will generally see a separate executable and then various archives and configuration files. Depending on the game, it may be a single resource pack, or perhaps a handful of packs for related resources like graphics, sound, and level data. As a general observation, console games are more aggressively archived (to minimize file operations on slow optical media, and perhaps to overcome limitations of the native file systems on more primitive platforms). Some PC games have very loose assets, with even script files hanging out in the open.
If you develop for Windows or XBox 360, Microsoft might offer some help here. Check out their Game Development tools for Visual Studio C++ Express Edition.
If you are looking for books, the Game Development Essentials series should answer your questions.
To deter modification of saved files, you can implement a simple encryption algorithm and use it to encrypt saves, then decrypt them when loading (see the sketch below). File extensions are simply a matter of choice.
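As an illustrative Python sketch of that idea (repeating-key XOR, which is obfuscation rather than real security; the key and paths are made up):

    from itertools import cycle

    KEY = b"made-up-secret"  # hypothetical key, baked into the game binary

    def scramble(data: bytes) -> bytes:
        # XOR with a repeating key; applying it twice restores the input,
        # so the same function both "encrypts" and "decrypts".
        return bytes(b ^ k for b, k in zip(data, cycle(KEY)))

    # Saving: write scrambled bytes instead of plaintext.
    with open("slot1.sav", "wb") as f:  # made-up save path
        f.write(scramble(b'{"level": 3, "gold": 250}'))

    # Loading: the same call decodes it.
    with open("slot1.sav", "rb") as f:
        save_data = scramble(f.read())
    print(save_data)  # b'{"level": 3, "gold": 250}'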
To use special file extensions in your game, just do the following:
1. Create some files in a format of your choice that have that extension, and then
2. write some code that knows how to read that format, and point it at those files.
File extensions are conventions, nothing more; there's nothing magic about them.
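A toy Python sketch of those two steps (the .rect extension and its format are invented for the example): a binary file of rectangles, plus the code that gives the extension its meaning.

    import struct

    # Writer: four little-endian int32s (x, y, w, h) per rectangle.
    with open("level1.rect", "wb") as f:  # invented extension and format
        for rect in [(0, 0, 64, 48), (10, 20, 32, 32)]:
            f.write(struct.pack("<4i", *rect))

    # Reader: the parser is what makes the extension meaningful.
    with open("level1.rect", "rb") as f:
        data = f.read()
    rects = [struct.unpack_from("<4i", data, off)
             for off in range(0, len(data), 16)]  # 16 bytes per record
    print(rects)  # [(0, 0, 64, 48), (10, 20, 32, 32)]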
ETA: As for embedding resources, there are a few different ways to approach that problem. One common technique is to keep all your resources bundled together in a small number of files - maybe only one (Guild Wars takes that approach).
At the other extreme, you can leave your resources spread across many files in a directory tree, maybe in a custom format that requires special tools to modify, and maybe not. Civilization 4 does things this way, as do all the Turbine games I'm familiar with. This is a matter of taste, and not very important either way.
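To make the bundling approach concrete, here is a toy Python sketch of a pack format (a length-prefixed JSON index followed by concatenated blobs; real engines use more robust formats with compression and integrity checks):

    import json
    import struct

    # Pack layout: [4-byte index size][JSON index][concatenated blobs].
    def write_pack(path, resources):
        index, blobs, offset = {}, [], 0
        for name, data in resources.items():
            index[name] = [offset, len(data)]
            blobs.append(data)
            offset += len(data)
        header = json.dumps(index).encode()
        with open(path, "wb") as f:
            f.write(struct.pack("<I", len(header)))
            f.write(header)
            f.write(b"".join(blobs))

    def read_resource(path, name):
        with open(path, "rb") as f:
            (hlen,) = struct.unpack("<I", f.read(4))
            offset, size = json.loads(f.read(hlen))[name]
            f.seek(4 + hlen + offset)  # blobs start right after the index
            return f.read(size)

    write_pack("assets.pak", {"hero.png": b"<png bytes>", "level1.txt": b"10 20 30"})
    print(read_resource("assets.pak", "level1.txt"))  # b'10 20 30'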
I think a better solution is to break your image into tiles of some known size and then join them back together in some random order in a new file. This random order is known only to you, so only you know how to rearrange the tiles to get the original image back.
The approach would be to maintain a one-dimensional array that holds the positions of the tiles. Then use the crop functions of MIDP to extract each tile and render the tiles back to the screen.
If you need, I can post the code for you.
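The answer has MIDP (Java ME) in mind; just to show the scheme, here is the same tile shuffle/unshuffle sketched in Python with Pillow (tile size and permutation seed are invented, and image dimensions are assumed to be multiples of the tile size):

    import random
    from PIL import Image

    TILE = 32    # tile edge in pixels; assumes dimensions divide evenly
    SEED = 1234  # the "random order only known to you"

    def tile_boxes(w, h):
        return [(x, y, x + TILE, y + TILE)
                for y in range(0, h, TILE) for x in range(0, w, TILE)]

    def permutation(n):
        order = list(range(n))
        random.Random(SEED).shuffle(order)  # deterministic, seed-keyed
        return order

    def shuffle_image(img):
        boxes = tile_boxes(*img.size)
        order = permutation(len(boxes))
        out = Image.new(img.mode, img.size)
        for i, src in enumerate(order):
            out.paste(img.crop(boxes[src]), boxes[i][:2])  # crop() ~ MIDP's region copy
        return out

    def unshuffle_image(img):
        boxes = tile_boxes(*img.size)
        order = permutation(len(boxes))
        out = Image.new(img.mode, img.size)
        for i, dst in enumerate(order):
            out.paste(img.crop(boxes[i]), boxes[dst][:2])  # invert the permutation
        return out

    # Round trip: unshuffle_image(shuffle_image(img)) == img for evenly tiled images.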
I would suggest checking out this presentation from the developers of World of Goo (great game):
http://2dboy.com/public/eyawtkagibwata.pdf.