Autonumbering equations in doxygen

Is it possible to automatically number equations in doxygen markdown documentation?
Example:
page1.md:
This is Equation 1:
\f[
\label{Eq:1}
\bbox[Pearl, 10px,border:1px solid black]
{
\rho = \frac{m}{V}
}
\f]
If I put a \tag{numberOfTheEquation} on each equation I do get a number.
But as you build the document, it is very inconvenient to renumber every equation whenever a new one is inserted in the middle of the document.
Best Regards!

When we have a file like:
/** \file
# Math references
\f{equation}{
\alpha = \beta * \gamma
\label{eq:my_system}
\f}
The reference: \f$\eqref{eq:my_system}\f$
*/
We can use a Doxyfile like:
USE_MATHJAX = YES
EXTRA_PACKAGES = amsmath amssymb
MATHJAX_EXTENSIONS = amssymb amsmath
MATHJAX_CODEFILE = mycode.js
with mycode.js:
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
This will result in automatically numbered equations, and the \eqref reference renders as a link to the corresponding equation number.

Related

formula to pick every pixel in a bitmap without repeating

I'm looking for an algorithm. I am programming in Swift now, but pseudocode or any reasonably similar "C family" syntax will do.
Imagine a large list of values, such as pixels in a bitmap. You want to pick each one in a visually random order, one at a time, and never pick the same one twice, and always end up picking them all.
I used it before in a Fractal generator so that it was not just rendering line by line, but built it up slowly in a stochastic way, but that was long ago, in a Java applet, and I no longer have the code.
I do not believe it used any pseudo-random number generator, and the main thing I liked about it is that it did not make the rendering take longer than the plain line-by-line approach. Any of the shuffling algorithms I looked at would make the rendering take longer with such a large number of values to deal with, unless I'm missing something.
EDIT: I went with the shuffle-an-array approach. I shuffle once when the app loads, and it does not take that long anyway. Here is the code for my "Dealer" class.
import Foundation
import Cocoa
import Quartz

class Dealer: NSObject
{
    //########################################################
    var deck = [(CGFloat, CGFloat)]()
    var count = 0
    //########################################################
    init(_ w: Int, _ h: Int)
    {
        super.init()
        deck.reserveCapacity(w * h)
        for y in 0..<h          // pixel coordinates run 0 ... h-1 and 0 ... w-1
        {
            for x in 0..<w
            {
                deck.append((CGFloat(x), CGFloat(y)))
            }
        }
        self.shuffle()
    }
    //########################################################
    func shuffle()
    {
        // Fisher-Yates: walk backwards, swapping each element with a randomly
        // chosen element at or below it, which gives an unbiased shuffle.
        for i in stride(from: deck.count - 1, through: 1, by: -1)
        {
            let j = Int(arc4random_uniform(UInt32(i + 1)))
            deck.swapAt(i, j)
        }
    }
    //########################################################
    func deal() -> (CGFloat, CGFloat)
    {
        let result = deck[count]
        // Wrap around so the deck can be dealt repeatedly
        if count < deck.count - 1 { count += 1 } else { count = 0 }
        return result
    }
    //########################################################
}
The init is called once, and it calls shuffle, but if you want you can call shuffle again if needed.
Each time you need a "card" you call deal(). It loops back to the beginning when the "deck" is exhausted.
If you have enough memory to store all the pixel positions, you can shuffle them:
const int xs = 640;            // image resolution
const int ys = 480;
const int sz = xs * ys;        // image size
color pixel[sz];               // image data
int adr[sz], i, j;
for (i = 0; i < sz; i++) adr[i] = i;   // ordered pixel positions
for (i = 0; i < sz; i++)               // shuffle them
{
    j = random(sz);            // pseudo-randomness with uniform distribution
    swap(adr[i], adr[j]);
}
This way you are guaranteed that each pixel position is used exactly once, and most likely all of them end up shuffled ...
You need to implement a pseudo-random number generator with a theoretically known period, which is greater than but very close to the number of elements in your list. Suppose R() is a function that implements such a RNG.
Then:
for i = 1...N
    do
        idx = R()
    while idx > N
    output element(idx)
end
If the period of the RNG is greater than N, this algorithm is guaranteed to finish, and never output the same element twice
If the period of the RNG is close to N, this algorithm will be fast (i.e. the do-while loop will mostly do 1 iteration).
If the RNG has good quality, the visual output will look pleasant; here you have to do experiments and decide what is good enough for you
To find a RNG that has an exactly-known period, you should examine theory on RNGs, which is very extensive (maybe too extensive); Wikipedia has useful links.
Start with Linear congruential generators: they are very simple, and there is a chance they will be of good enough quality.
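For illustration (this is my own sketch, not code from the answers here), the same idea in Python, using a linear congruential generator whose full period is guaranteed by the Hull–Dobell theorem: with a power-of-two modulus m, any multiplier a ≡ 1 (mod 4) and any odd increment c give a period of exactly m, so choosing m as the next power of two ≥ N and rejecting values ≥ N visits every index exactly once, with no extra memory. The function name and constants are arbitrary choices.
# Visit indices 0 .. n-1 exactly once, in a pseudo-random order,
# without storing or shuffling an array.
def random_order(n):
    m = 1
    while m < n:                     # next power of two >= n
        m <<= 1
    a = 1664525 * 4 + 1              # multiplier: a % 4 == 1 (Hull-Dobell condition)
    c = 1013904223                   # increment: odd, hence coprime with m
    x = 0
    for _ in range(m):               # one full period hits every value < m exactly once
        x = (a * x + c) % m
        if x < n:                    # reject values outside the list
            yield x

# e.g. visit all pixels of a 640x480 image in a scattered order:
# for idx in random_order(640 * 480):
#     x, y = idx % 640, idx // 640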
Here's a working example based on linear feedback shift registers. Since an n-bit LFSR has a maximal sequence length of 2^n − 1 steps, this will work best when the number of pixels is one less than a power of 2. For other sizes, the pseudo-random coordinates are discarded until one is obtained that lies within the specified range of coordinates. This is still reasonably efficient; in the worst case (where w×h is a power of 2), there will be an average of two LFSR iterations per coordinate pair.
The following code is in Javascript, but it should be easy enough to port this to Swift or any other language.
Note: For large canvas areas like 1920×1024, it would make more sense to use repeated tiles of a smaller size (e.g., 128×128). The tiling will be imperceptible.
var lsfr_register, lsfr_mask, lsfr_fill_width, lsfr_fill_height, lsfr_state, lsfr_timer;
var lsfr_canvas, lsfr_canvas_context, lsfr_blocks_per_frame, lsfr_frame_rate = 50;

function lsfr_setup(width, height, callback, duration) {
  // Maximal length LFSR feedback terms
  // (sourced from http://users.ece.cmu.edu/~koopman/lfsr/index.html)
  var taps = [ -1, 0x1, 0x3, 0x5, 0x9, 0x12, 0x21, 0x41, 0x8E, 0x108, 0x204, 0x402,
               0x829, 0x100D, 0x2015, 0x4001, 0x8016, 0x10004, 0x20013, 0x40013,
               0x80004, 0x100002, 0x200001, 0x400010, 0x80000D, 0x1000004, 0x2000023,
               0x4000013, 0x8000004, 0x10000002, 0x20000029, 0x40000004, 0x80000057 ];
  var nblocks = width * height;
  var lsfr_size = nblocks.toString(2).length;
  if (lsfr_size > 32) {
    // Anything longer than about 21 bits would be quite slow anyway
    console.log("Unsupported LFSR size (" + lsfr_size + ")");
    return;
  }
  lsfr_register = 1;
  lsfr_mask = taps[lsfr_size];
  lsfr_state = nblocks;
  lsfr_fill_width = width;
  lsfr_fill_height = height;
  lsfr_blocks_per_frame = Math.ceil(nblocks / (duration * lsfr_frame_rate));
  lsfr_timer = setInterval(callback, Math.ceil(1000 / lsfr_frame_rate));
}

function lsfr_step() {
  var x, y;
  do {
    // Generate x,y pairs until they are within the bounds of the canvas area
    // Worst-case for an n-bit LFSR is n iterations in one call (2 on average)
    // Best-case (where w*h is one less than a power of 2): 1 call per iteration
    if (lsfr_register & 1) lsfr_register = (lsfr_register >> 1) ^ lsfr_mask;
    else lsfr_register >>= 1;
    y = Math.floor((lsfr_register - 1) / lsfr_fill_width);
  } while (y >= lsfr_fill_height);
  x = (lsfr_register - 1) % lsfr_fill_width;
  return [x, y];
}

function lsfr_callback() {
  var coords;
  for (var i = 0; i < lsfr_blocks_per_frame; i++) {
    // Fetch pseudo-random coordinates and fill the corresponding pixels
    coords = lsfr_step();
    lsfr_canvas_context.fillRect(coords[0], coords[1], 1, 1);
    if (--lsfr_state <= 0) {
      clearInterval(lsfr_timer);
      break;
    }
  }
}

function start_fade() {
  var w = document.getElementById("w").value * 1;
  var h = document.getElementById("h").value * 1;
  var dur = document.getElementById("dur").value * 1;
  lsfr_canvas = document.getElementById("cv");
  lsfr_canvas.width = w;
  lsfr_canvas.height = h;
  lsfr_canvas_context = lsfr_canvas.getContext("2d");
  lsfr_canvas_context.fillStyle = "#ffff00";
  lsfr_canvas_context.fillRect(0, 0, w, h);
  lsfr_canvas_context.fillStyle = "#ff0000";
  lsfr_setup(w, h, lsfr_callback, dur);
}
Size:
<input type="text" size="3" id="w" value="320"/>
×
<input type="text" size="3" id="h" value="240"/>
in
<input type="text" size="3" id="dur" value="3"/>
secs
<button onclick="start_fade(); return 0">Start</button>
<br />
<canvas id="cv" width="320" height="240" style="border:1px solid #ccc"/>

Why is doxygen truncating a parameterized macro function attribute?

Given the following macro definition:
#if defined(__GNUC__) && __GNUC__ >= 4
#define WL_PRINTF(x, y) __attribute__((__format__(__printf__, x, y)))
#else
#define WL_PRINTF(x, y)
#endif
And given the following use, as a gcc function attribute:
typedef void (*wl_log_func_t)(const char *, va_list) WL_PRINTF(1, 0);
Doxygen seems to truncate part of the function attribute in the generated output.
Doxygen also truncates it similarly in other cases where I use the macro as a function attribute, so the problem seems consistent (it's not specific to this being a typedef). It documents the macro itself just fine.
My .doxygen config is:
PROJECT_NAME = "Wayland"
PROJECT_NUMBER = 1.12.90
OUTPUT_DIRECTORY = ../../doc/doxygen
JAVADOC_AUTOBRIEF = YES
TAB_SIZE = 8
QUIET = YES
HTML_TIMESTAMP = YES
GENERATE_LATEX = NO
MAN_LINKS = YES
PREDEFINED = WL_EXPORT=
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
DOT_MULTI_TARGETS = YES
ALIASES += comment{1}="/* \1 *<!-- -->/"
OPTIMIZE_OUTPUT_FOR_C = YES
EXTRACT_ALL = YES
EXTRACT_STATIC = YES
GENERATE_HTML = NO
GENERATE_XML = NO
GENERATE_MAN = NO
Is there some neat way to trick Doxygen into not truncating this?
This seems to be a limitation (pronounced 'bug') in Doxygen that causes it to truncate the declaration. The workaround is to remove the function attribute from the generated documentation entirely, by adding the macro to the PREDEFINED config.
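For example, something along these lines in the Doxyfile should do it (MACRO_EXPANSION and EXPAND_ONLY_PREDEF are already enabled in the config above; this makes Doxygen's preprocessor expand the function-like macro to nothing, so the attribute never reaches the parser):
PREDEFINED             = WL_EXPORT= \
                         "WL_PRINTF(x,y)="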

Modify threshold in ReLU in Caffe framework

I am new to Caffe, and I need to modify the threshold values in the ReLU layers of a convolutional neural network. The way I am doing this now is to edit the C++ source code in caffe/src/caffe/layers/relu_layer.cpp and recompile it. However, this changes the threshold to the same specified value every time ReLU is called. Is there any way to use a different threshold value in each ReLU layer of a network? By the way, I am using the pycaffe interface and I cannot find a way to do this there.
Finally, sorry for my poor English; if anything is unclear, just let me know and I'll try to describe it in more detail.
If I understand correctly, your "ReLU with threshold" is basically
f(x) = x - threshold if x > threshold, 0 otherwise
You can easily implement it by adding a "Bias" layer which subtracts threshold from the input just prior to a regular "ReLU" layer, as sketched below.
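Assuming a Caffe version that ships the "Bias" layer, a sketch of such a prototxt fragment might look like this (the layer and blob names and the threshold value of 1 are only illustrative; the bias parameter is frozen so it stays at -threshold):
layer {
  name: "shift"
  type: "Bias"
  bottom: "conv1"
  top: "conv1_shifted"
  param { lr_mult: 0 decay_mult: 0 }          # freeze the bias so it is never learned
  bias_param {
    axis: 0
    num_axes: 0                               # a single scalar bias for the whole blob
    filler { type: "constant" value: -1.0 }   # -threshold (here threshold = 1)
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1_shifted"
  top: "conv1_shifted"
}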
Yes, you can. In src/caffe/proto/caffe.proto, add a line:
message ReLUParameter {
  ...
  optional float threshold = 3 [default = 0]; // add this line
  ...
}
and in src/caffe/layers/relu_layer.cpp, make some small modifications as:
template <typename Dtype>
void ReLULayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
...
Dtype threshold = this->layer_param_.relu_param().threshold(); //add this line
for (int i = 0; i < count; ++i) {
top_data[i] = (bottom_data[i] > threshold) ? (bottom_data[i] - threshold) :
(negative_slope * (bottom_data[i] - threshold));
}
}
template <typename Dtype>
void ReLULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom) {
if (propagate_down[0]) {
...
Dtype threshold = this->layer_param_.relu_param().threshold(); //this line
for (int i = 0; i < count; ++i) {
bottom_diff[i] = top_diff[i] * ((bottom_data[i] > threshold)
+ negative_slope * (bottom_data[i] <= threshold));
}
}
}
and make the same changes in src/caffe/layers/relu_layer.cu for the GPU implementation.
And after recompiling your caffe and pycaffe, you can write a ReLU layer in your net.prototxt like:
layer {
  name: "threshold_relu"
  type: "ReLU"
  relu_param { threshold: 1 }  # e.g. you want this relu layer to have a threshold of 1
  bottom: "input"
  top: "output"
}
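Since the question mentions pycaffe: once caffe and pycaffe are rebuilt with this change, nothing extra is needed on the Python side, because the per-layer thresholds live in the prototxt. A minimal sketch of loading and running such a net (the file name is a placeholder):
import caffe

net = caffe.Net('net.prototxt', caffe.TEST)  # thresholds are picked up from the prototxt
out = net.forward()                          # runs the modified ReLU layers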

Matlab: how to extract information from a header file (text file)

I have many text files that have 35 lines of header followed by a large matrix with data of an image (that part can be ignored; I do not need to read it at the moment). I want to be able to read the header lines and extract the information contained in them. For instance, the first few lines of the header are:
File Version Number: 1.0
Date: 06/05/2015
Time: 10:33:44 AM
===========================================================
Beam Voltage (-kV) = 13.000
Filament (W) = 4.052
Cond. (-kV) = 8.885
CenterX1 (V) = 10.7
CenterY1 (V) = -45.9
Objective (%) = 71.40
OctupoleX = -0.4653
OctupoleY = -0.1914
Angle (deg) = 0.00
.
I would like to be able to open this text file and read the values of the date and time the file was created, the filament power, the condenser voltage, the angle, etc., and save these in variables or send them to a text box in a GUI program.
I have tried several things, but since the values I want to extract sometimes come after a '=', sometimes after a ':', and sometimes simply after a '', I do not know how to approach this. Perhaps reading each line and looking for a matching keyword?
Any help would be much appreciated.
Thanks,
Alex
This is not particularly difficult, and one of the ways to do it would be to parse line-by-line as you suggested. Something like this:
MAX_LINES_TO_READ = 35;

fid = fopen('input.txt');
lineCount = 0;
dateString = '';
beamVoltage = 0;

while ~feof(fid)
    line = fgetl(fid);
    lineCount = lineCount + 1;

    %//check conditions for skipping loop body
    if isempty(line)
        continue
    elseif lineCount > MAX_LINES_TO_READ
        break
    end

    %//find headers you are interested in
    if strfind(line, 'Date')
        %//find the first location of the header separator
        idx = find(line == ':', 1);
        %//extract substring starting from 1 char after separator
        %//note: the trim is to get rid of leading/trailing whitespace
        dateString = strtrim(line(idx + 1 : end));
    elseif strfind(line, 'Beam Voltage')
        idx = find(line == '=', 1);
        beamVoltage = str2double(line(idx + 1 : end));
    end
end

fclose(fid);

Preserving colors during CMYK to RGB transformation in PIL

I'm using PIL to process uploaded images. Unfortunately, I'm having trouble with color conversion from CMYK to RGB, as the resulting images tone and contrast changes.
I'd suspect that it's only doing direct number transformations. Does PIL, or anything built on top of it, have an Adobe-style, dummy-proof "consume embedded profile, convert to destination, preserve numbers" tool I can use for conversion?
In all my healthy ignorance and inexperience, this sort of jumped at me and it's got me in a pinch. I'd really like to get this done without engaging any intricacies of color spaces, transformations and the necessary math for both at this point.
Though I've never used it before, I'm also open to using ImageMagick for this processing step if anyone has experience indicating it can handle it gracefully.
It didn't take me long to run into other people mentioning Little CMS, which is the most popular open-source solution for color management. I ended up snooping around for Python bindings, found the old pyCMS and some ostensible notions about PIL supporting Little CMS.
Indeed, there is support for Little CMS, it's mentioned in a whole whopping one-liner:
CMS support: littleCMS (1.1.5 or later is recommended).
The documentation contains no references, no topical guides, Google didn't crawl out anything, their mailing list is closed... but digging through the source there's a PIL.ImageCms module that's well documented and gets the job done. Hope this saves someone from a messy internet excavation.
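For reference, a minimal sketch of the PIL.ImageCms route just described (the fallback profile file name is a placeholder; point it at a real .icc file):
import io
from PIL import Image, ImageCms

img = Image.open('upload.jpg')                  # a CMYK image, ideally with an embedded profile

# Use the embedded CMYK profile if there is one, otherwise fall back to a generic CMYK profile
icc_bytes = img.info.get('icc_profile')
if icc_bytes:
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
else:
    src_profile = ImageCms.getOpenProfile('USWebCoatedSWOP.icc')  # placeholder profile file

dst_profile = ImageCms.createProfile('sRGB')    # built-in sRGB destination profile

rgb = ImageCms.profileToProfile(img, src_profile, dst_profile, outputMode='RGB')
rgb.save('converted.png')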
Goes off getting himself a cookie...
It's 2019 and things have changed. Your problem is significantly more complex than it may appear at first sight. The problem is that CMYK to RGB and RGB to CMYK is not a simple there-and-back conversion. If, for example, you open an image in Photoshop and convert it there, the conversion has two additional parameters: the source color profile and the destination color profile. These change things greatly! For a typical use case, you would assume Adobe RGB 1998 on the RGB side and, say, Coated FOGRA 39 on the CMYK side. These two additional pieces of information tell the converter how to interpret the colors on input and output. What you need next is a transformation mechanism; Little CMS is indeed a great tool for this. It is MIT licensed and (after looking for solutions myself for a considerable time) I would recommend the following setup if you do need a Python way to transform colors:
Python 3.X (necessary because of littlecms)
pip install littlecms
pip install Pillow
In littlecms' /tests folder you will find a great set of examples. What follows is my own adaptation of one of those tests. Before you get to the code, a word about color profiles: on Windows, as in my case, you will find a set of files with an .icc extension in the folder C:\Windows\System32\spool\drivers\color, where Windows stores its color profiles. You can download other profiles from sites like https://www.adobe.com/support/downloads/iccprofiles/iccprofiles_win.html and install them on Windows simply by double-clicking the corresponding .icc file. The example I provide depends on such profile files, which Little CMS uses to do those magic color transforms.
I work as a semi-professional graphics designer and needed to be able to convert colors from CMYK to RGB and vice versa for certain scripts that manipulate objects in InDesign. My setup is RGB: Adobe RGB 1998 and CMYK: Coated FOGRA 39 (these settings were recommended by most book printers I get my books printed at). The aforementioned color profiles gave me results very similar to the same transforms made by Photoshop and InDesign. Still, be warned: the colors are slightly (by around 1%) off in comparison to what PS and Id will give you for the same inputs. I am still trying to figure out why...
The little program:
import littlecms as lc
from PIL import Image

def rgb2cmykColor(rgb, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc') :
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY() # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r') # cmsCreate_sRGBProfile()
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_RGB_8, dst_profile, lc.TYPE_CMYK_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = 1
    in_comps = 3
    out_comps = 4
    rgb_in = lc.uint8Array(in_comps * n_pixels)
    cmyk_out = lc.uint8Array(out_comps * n_pixels)
    for i in range(in_comps):
        rgb_in[i] = rgb[i]
    lc.cmsDoTransform(transform, rgb_in, cmyk_out, n_pixels)
    cmyk = tuple(cmyk_out[i] for i in range(out_comps * n_pixels))
    return cmyk

def cmyk2rgbColor(cmyk, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc') :
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY() # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r') # cmsCreate_sRGBProfile()
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_CMYK_8, dst_profile, lc.TYPE_RGB_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = 1
    in_comps = 4
    out_comps = 3
    cmyk_in = lc.uint8Array(in_comps * n_pixels)
    rgb_out = lc.uint8Array(out_comps * n_pixels)
    for i in range(in_comps):
        cmyk_in[i] = cmyk[i]
    lc.cmsDoTransform(transform, cmyk_in, rgb_out, n_pixels)
    rgb = tuple(rgb_out[i] for i in range(out_comps * n_pixels))
    return rgb

def rgb2cmykImage(PILImage, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc') :
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY() # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_RGB_8, dst_profile, lc.TYPE_CMYK_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = PILImage.size[0]
    in_comps = 3
    out_comps = 4
    n_rows = 16
    rgb_in = lc.uint8Array(in_comps * n_pixels * n_rows)
    cmyk_out = lc.uint8Array(out_comps * n_pixels * n_rows)
    outImage = Image.new('CMYK', PILImage.size, 'white')
    in_row = Image.new('RGB', (PILImage.size[0], n_rows), 'white')
    out_row = Image.new('CMYK', (PILImage.size[0], n_rows), 'white')
    out_b = bytearray(n_pixels * n_rows * out_comps)
    row = 0
    while row < PILImage.size[1] :
        in_row.paste(PILImage, (0, -row))
        data_in = in_row.tobytes('raw')
        j = in_comps * n_pixels * n_rows
        for i in range(j):
            rgb_in[i] = data_in[i]
        lc.cmsDoTransform(transform, rgb_in, cmyk_out, n_pixels * n_rows)
        # copy the converted bytes into the output buffer
        for j in range(out_comps * n_pixels * n_rows):
            out_b[j] = cmyk_out[j]
        out_row = Image.frombytes('CMYK', in_row.size, bytes(out_b))
        outImage.paste(out_row, (0, row))
        row += n_rows
    return outImage

def cmyk2rgbImage(PILImage, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc') :
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY() # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_CMYK_8, dst_profile, lc.TYPE_RGB_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = PILImage.size[0]
    in_comps = 4
    out_comps = 3
    n_rows = 16
    cmyk_in = lc.uint8Array(in_comps * n_pixels * n_rows)
    rgb_out = lc.uint8Array(out_comps * n_pixels * n_rows)
    outImage = Image.new('RGB', PILImage.size, 'white')
    in_row = Image.new('CMYK', (PILImage.size[0], n_rows), 'white')
    out_row = Image.new('RGB', (PILImage.size[0], n_rows), 'white')
    out_b = bytearray(n_pixels * n_rows * out_comps)
    row = 0
    while row < PILImage.size[1] :
        in_row.paste(PILImage, (0, -row))
        data_in = in_row.tobytes('raw')
        j = in_comps * n_pixels * n_rows
        for i in range(j):
            cmyk_in[i] = data_in[i]
        lc.cmsDoTransform(transform, cmyk_in, rgb_out, n_pixels * n_rows)
        # copy the converted bytes into the output buffer
        for j in range(out_comps * n_pixels * n_rows):
            out_b[j] = rgb_out[j]
        out_row = Image.frombytes('RGB', in_row.size, bytes(out_b))
        outImage.paste(out_row, (0, row))
        row += n_rows
    return outImage
Something to note for anyone implementing this: you probably want to take the uint8 CMYK values (0-255) and round them into the range 0-100 to better match most color pickers and uses of these values. See my code here: https://gist.github.com/mattdesl/ecf305c2f2b20672d682153a7ed0f133
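For instance, a quick sketch of that scaling in Python:
# scale 8-bit CMYK channel values (0-255) to the 0-100 percentages most color pickers expect
cmyk_percent = tuple(round(v * 100 / 255) for v in cmyk)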