I'm trying to use export_fig (latest version, altmany-export_fig-5ad44c4) with MATLAB R2016b (Windows 10, 64-bit, Ghostscript 9.20). The settings of my graphics card are given below.
plot(cos(linspace(0, 7, 1000)));
export_fig('test1.png','test1.pdf')
print('test2.pdf','-dpdf')
The PNG file is generated, but the PDF file contains only a gray box, and export_fig reports no error. The native MATLAB print command produces a correct PDF file, but with a lot of white space around the plot; to get rid of that white space I'd like to use export_fig.
Any help would be greatly appreciated. Thanks a lot.
Cheers
Heiner
opengl info
Version: '4.5.13399 Compatibility Profile Context 15.201.1151.1008'
Vendor: 'ATI Technologies Inc.'
Renderer: 'AMD RADEON HD 6450'
RendererDriverVersion: '15.201.1151.1008'
RendererDriverReleaseDate: '04-Nov-2015'
MaxTextureSize: 16384
Visual: 'Visual 0x08, (RGBA 32 bits (8 8 8 8), Z depth 16 bits, Hardware acceleration, Double buffer, Antialias 4 samples)'
Software: 'false'
HardwareSupportLevel: 'full'
SupportsGraphicsSmoothing: 1
SupportsDepthPeelTransparency: 1
SupportsAlignVertexCenters: 1
Extensions: {266×1 cell}
MaxFrameBufferSize: 16384
While answering this question I ran into an unrelated issue: when creating figures with a really large height, the figures are cropped. If the 'Units' property of a figure's children is set to 'normalized', the child in question is shrunk rather than cropped.
The question is: why is the figure height limited, and which property controls that limit? It looks like I'm limited to a figure height of 9.94" (Dell Latitude E5500; Win7 Enterprise, 32-bit; MATLAB 2011b; resolution 1400x900 px).
Edit
I tried this:
>> set(gcf,'position',[10 10 600 600]),get(gcf,'position')
ans =
10.0000 10.0000 28.3333 9.8750
>> set(gcf,'position',[0 0 600 600]),get(gcf,'position')
ans =
0 0 600 600
The figure obtained by export_fig is in both cases 28.35" x 9.88", as measured in Adobe Acrobat 9 Pro.
I suspect it has to do with the maximum display size detected by MATLAB and the pixel density of your system.
On my Matlab R2013a, Windows 7, with a screen 1900x1200, I can get a bigger figure than you, but it will still be truncated:
%% // MATLAB R2013A - Windows 7, 1900x1200pixels
set(gcf,'units','inches','position',[1 -5 6 15])
get(gcf,'position')
get(gcf,'OuterPosition')
returns:
ans =
1.00 -5.00 6.00 11.81
ans =
0.92 -5.08 6.17 12.71
My maximum vertical figure size was cut at 11.81 inches. Now that is the inside of a Matlab figure. The real size including the title bar and borders is given by the property OuterPosition.
Now consider:
>> get(0,'ScreenSize')
ans =
1.00 1.00 1920.00 1200.00
>> get(0,'ScreenPixelsPerInch')
ans =
96.00
If we do 1200 px / 96 ppi = 12.5 in: with this screen density, MATLAB can only display 12.5 inches worth of graphics. This becomes even more obvious if you set the units to 'Pixels':
set(gcf,'units','inches','position',[1 -5 6 15])
set(gcf,'units','Pixels')
get(gcf,'position')
get(gcf,'OuterPosition')
ans =
97.00 -479.00 576.00 1134.00
ans =
89.00 -487.00 592.00 1220.00
The figure was truncated at exactly 1220 pixels (the inches unit is just a conversion; MATLAB's base unit is pixels). I suspect the extra 20 pixels are an allowance for the title bar.
Now with your numbers: I do not have the OuterPosition of your figure, but even the inner position roughly matches your screen dimension (900 px / 96 ppi = 9.375 in). Try forcing the units back to Pixels and getting the OuterPosition of the figure; I wouldn't be surprised if you get 920 pixels.
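The pixel/inch arithmetic used above is simple enough to sketch in a few lines. This is just an illustration in Python (the function name is mine, not anything from MATLAB):

```python
def max_figure_height_inches(screen_height_px, ppi):
    """Convert a screen height in pixels to the drawable height in inches
    at a given pixel density, as in the thread's 1200 px / 96 ppi example."""
    return screen_height_px / ppi

# 1200-pixel-tall screen at 96 ppi -> 12.5 inches of drawable graphics
print(max_figure_height_inches(1200, 96))   # 12.5

# 900-pixel-tall screen at 96 ppi -> 9.375 inches, close to the ~9.94" limit observed
print(max_figure_height_inches(900, 96))    # 9.375
```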
Now it seems you only need to worry about that for older versions of Matlab. On the same machine (Win 7, 1900x1200px), with Matlab R2015b, no more automatic cropping:
%% // MATLAB R2015B - Windows 7, 1900x1200pixels
set(gcf,'units','inches','position',[1 -5 6 15])
get(gcf,'position')
get(gcf,'OuterPosition')
ans =
1.00 -5.00 6.00 15.00
ans =
0.92 -5.08 6.17 15.40
set(gcf,'units','Pixels')
get(gcf,'position')
get(gcf,'OuterPosition')
ans =
97.00 -479.00 576.00 1440.00
ans =
89.00 -487.00 592.00 1478.00
The new graphics engine of MATLAB seems to have lifted that restriction: my figure is now bigger than my screen size (whether you look at pixels or inches).
There is a captcha I'm trying to solve; I know it always contains digits.
When I try the command tesseract cap.png cap it returns an empty page!
When I try the command tesseract cap.png cap -psm 7 digits && cat cap.txt it returns:
[root#usa1 ~]# tesseract cap.png cap -psm 7 digits && cat cap.txt
Tesseract Open Source OCR Engine v3.05.00dev with Leptonica
Info in pixReadStreamPng: converting (cmap + alpha) ==> RGBA
Info in pixReadStreamPng: converting 8 bpp cmap with alpha ==> RGBA
41-1-8 5
and also:
[root#usa1 ~]# tesseract cap.png cap -psm 7 digits && cat cap.txt
Tesseract Open Source OCR Engine v3.05.00dev with Leptonica
Info in pixReadStreamPng: converting (cmap + alpha) ==> RGBA
Info in pixReadStreamPng: converting 8 bpp cmap with alpha ==> RGBA
7 5
The captcha sample is:
The main goal is to get an accurate result. I also noticed that running the same command twice makes no difference to the result, so I won't be able to run it, for example, 3 times and compare the different outputs, right?
And as for the empty-page error, I guess I somehow need to increase the quality of the PNG file; am I wrong?
I have a study project in microcontroller applications. There is a board with a 2 x 2 array of magnetic sensors and a magnet. The purpose is to compute the position (X, Y, Z) of the magnet from the data of the 4 sensors (far left, far right, near left, near right). Each sensor has an output of 0..5 volts (without a magnetic field the output is 2.5 volts).
Now I want to train a neural network to predict the x, y, z coordinates of the magnet from the 4 sensor inputs. But I have no idea which neural network type (multilayer perceptron, Adaline, Hopfield, or any of the others) to use, or which topology (how many layers and how many hidden neurons per layer).
Last week I took a measurement, using Lego building blocks to get an "exact" position of the magnet, and saved the sensor data. You can find the measurement here: measurement03_xyz.csv. Here is an excerpt of the measurement at height z = 10.6 mm:
Lt.Far Rt.Far Lt.Near Rt.Near X Y Z
2.45357 2.43891 2.43891 2.52688 -16 -16 10.6
2.45846 2.46334 2.51222 2.6784 -8 -16 10.6
2.48289 2.46334 2.63441 2.68328 0 -16 10.6
2.49267 2.45357 2.69306 2.54643 8 -16 10.6
2.46334 2.43402 2.56598 2.48778 16 -16 10.6
2.46334 2.51222 2.46823 2.65396 -16 -8 10.6
2.51711 2.64907 2.62463 3.14272 -8 -8 10.6
2.69306 2.72239 3.15738 3.38221 0 -8 10.6
2.74194 2.56598 3.41642 2.84457 8 -8 10.6
2.58065 2.45846 2.77615 2.53666 16 -8 10.6
2.48289 2.62952 2.46823 2.69795 -16 0 10.6
2.66862 3.18671 2.66373 3.33822 -8 0 10.6
3.24536 3.4262 3.33822 3.57282 0 0 10.6
3.46041 2.83969 3.63148 2.90323 8 0 10.6
2.81525 2.51222 2.90811 2.54643 16 0 10.6
2.49267 2.65885 2.45357 2.57576 -16 8 10.6
2.69306 3.26979 2.54154 2.81036 -8 8 10.6
3.37732 3.57282 2.81525 2.93255 0 8 10.6
3.5826 2.88368 2.88368 2.65396 8 8 10.6
2.8739 2.51711 2.6393 2.49756 16 8 10.6
2.47312 2.55621 2.42913 2.50244 -16 16 10.6
2.56598 2.76637 2.46334 2.54154 -8 16 10.6
2.81036 2.84946 2.50733 2.55621 0 16 10.6
2.87879 2.63441 2.52199 2.51711 8 16 10.6
2.64907 2.47801 2.48778 2.47312 16 16 10.6
(first 4 columns are the sensor inputs in volts and the last 3 columns are the position in mm)
My first attempt was to create a multilayer perceptron network with Neuroph Studio, like this:
But when I begin the training, the total network error goes down very, very, very slowly.
I hope that someone with experience in neural networks can advise me which network type and topology to choose.
In addition, here are some graphs of the measurement:
- Voltage of the left, near sensor in volts:
- Voltage of the left, near sensor in volts as a function of the distance to the fixed point (X=-8mm, Y=0mm, Z=1mm), for all measured points (left) and for only the points on the XZ plane at y=0mm:
An MLP should do. Make it as simple as you possibly can (but no simpler). Try only one hidden layer first, with as few neurons as possible, e.g. start with 4-5-3 (single hidden layer with 5 neurons). Train and validate the network, and only add more layers or neurons as long as they provide value.
Remember to separate your training set into training and validation sets or you will risk overfitting the data, especially if you choose a complex network. As long as you get lower errors in validation you can increase complexity. If you start seeing lower errors on the training set but no difference/higher errors on unseen data, you are overfitting and should go back to a simpler network.
Also consider normalizing your data, as it helps training; e.g. invent a coordinate system and voltage reference in which both inputs and outputs are scaled, to help the training algorithm converge. Here is a good article on normalization and encoding.
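The normalization step can be sketched in a few lines. This is an illustration in Python/NumPy rather than any particular framework's API; the function name and the exact target range [-1, 1] are my choices:

```python
import numpy as np

def minmax_scale(data, lo=-1.0, hi=1.0):
    """Linearly scale each column of `data` into the range [lo, hi]."""
    data = np.asarray(data, dtype=float)
    mins = data.min(axis=0)
    spans = data.max(axis=0) - mins
    return lo + (hi - lo) * (data - mins) / spans

# Example: a few rows of sensor voltages (values taken from the table above)
volts = np.array([[2.45357, 2.43891, 2.43891, 2.52688],
                  [3.24536, 3.42620, 3.33822, 3.57282],
                  [2.46334, 2.51222, 2.46823, 2.65396]])
scaled = minmax_scale(volts)
# every column of `scaled` now spans exactly [-1, 1]
```

The same scaling would be applied to the X, Y, Z targets, and inverted after prediction.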
I usually don't like it when people answer questions like: "how to use X to solve Z" with something like "why use X when you could use Y" so I apologize in advance for doing just that.
In your case, I would just triangulate the position of the magnet from the values of the 4 sensors in 4 groups of 3 with no need to train a neural network.
If you have your sensors labeled like this:
A o ---------------o B
|\ / |
| \ / |
| \ / |
| \ / |
| / \ |
| / \ |
| / \ |
|/ \ |
D o ---------------o C
Then you have the following 4 groups to use: (A,B,C), (A,D,C), (D,A,B), and (B,C,D). Finding the X, Y, Z coordinates of the magnet then reduces to solving the prism, which is a geometric problem.
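Assuming each sensor voltage can first be mapped to a distance from the magnet (a calibration step this answer takes for granted), one such triple determines the position by standard trilateration. A sketch in Python/NumPy, with the sensor coordinates below being hypothetical:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Position of a point from three known sensor positions and the
    distances to them (standard trilateration). Assumes the sensors are
    not collinear and returns the solution on the +z side of their plane."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)        # local x axis
    i = ex.dot(p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)                         # local y axis
    ez = np.cross(ex, ey)                            # local z axis
    d = np.linalg.norm(p2 - p1)
    j = ey.dot(p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    return p1 + x * ex + y * ey + z * ez

# Hypothetical sensor positions (mm) for three corners of the board,
# and a magnet hovering above it:
A, B, D = [0, 0, 0], [32, 0, 0], [0, 32, 0]
magnet = np.array([8.0, 8.0, 10.6])
dists = [np.linalg.norm(magnet - np.array(p)) for p in (A, B, D)]
print(trilaterate(A, B, D, *dists))  # ~ [ 8.   8.  10.6]
```

The four groups would each give one estimate, which could then be averaged.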
That said, if you are intent on using a neural network for the task you'll have to experiment with different configurations until you're happy with the results.
Make sure you have a really large training set, then create a separate cross-validation data set.
The idea is to use the training set to train the network and the cross-validation set to explore the effects of changing various parameters, such as the topology of the neural network, the learning coefficients, etc.
How do I get the first 32 bits and the last 32 bits of a uint64, and save them to two uint32 variables, using low-level operations such as bitshift, and, xor...? It seems like an easy problem, but MATLAB has some limitations on bit manipulation (e.g. doubles only represent integers exactly up to 53 bits).
You can typecast() it into 'uint32' and convert to binary:
x64 = uint64(43564);
x32 = typecast(x64,'uint32');
x32 =
43564 0
dec2bin(x32)
ans =
1010101000101100
0000000000000000
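The same split can be done with the shift/mask operations the question asks about, as long as the value stays a uint64 throughout. A sketch in Python for illustration (in MATLAB the analogous calls would be bitand and bitshift on the uint64 value):

```python
def split_uint64(x):
    """Split a 64-bit value into (low 32 bits, high 32 bits) using mask and shift."""
    lo = x & 0xFFFFFFFF          # keep the low 32 bits
    hi = (x >> 32) & 0xFFFFFFFF  # shift the high 32 bits down
    return lo, hi

print(split_uint64(43564))                # (43564, 0) -- matches the typecast output above
print(split_uint64(0xAAAAAAAAAAAAAAAA))   # (2863311530, 2863311530)
```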
This is supplementary to @Oleg's correct answer, in response to @Ruofeng's comment.
By doing hex2dec you are converting to double, which doesn't have enough precision to store your hex number aaaaaaaaaaaaaaaa exactly. If you stick to uint64 you are OK.
See http://www.mathworks.com/matlabcentral/fileexchange/26005-convert-a-number-in-hex-to-uint64/content/hex2uint64.m.
Then x64=hex2uint64('aaaaaaaaaaaaaaaa'); followed by Oleg's answer [i.e. x32 = typecast(x64,'uint32');] gives the two parts identical:
x32 =
2863311530 2863311530
My data set includes 29 inputs and 6 outputs. When I use
net = newff(minmax(Pl),[14 12 8 6]);
to build my feed forward MLP network and train it by
net.trainParam.epochs=50;
net=train(net,Pl,Tl);
the network cannot learn my data set and its error does not decrease below 0.7, but when I specify the arguments of the newff function like this:
net=newff(minmax(Pl),[14 12 8 6],{'tansig' 'tansig' 'tansig' 'purelin'},'trainlm');
the error decreases very fast and goes below 0.0001! The odd thing is that when I use the previous code with only one hidden layer containing 2 neurons:
net=newff(minmax(Pl),[2 6],{'tansig' 'purelin'},'trainlm');
the error again decreases below 0.2, which is suspicious!
Please give me some tips and help me understand the difference between:
net = newff(minmax(Pl),[14 12 8 6]);
and
net=newff(minmax(Pl),[14 12 8 6],{'tansig' 'tansig' 'tansig' 'purelin'},'trainlm');
?
I think that the second argument to NEWFF (link requires login) is supposed to be the target vectors, not the sizes of the hidden layers (which are the third argument).
Note that the default transfer function for hidden layers is tansig and for output layer is purelin, and the default training algorithm is trainlm.
Finally, remember that if you want to get reproducible results, you have to manually reset the random number generator to a fixed state at the beginning of each run.
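The reproducibility point holds in any framework: two runs seeded with the same fixed state produce identical random streams. An illustration in Python/NumPy (in MATLAB, rng(seed) plays the same role):

```python
import numpy as np

# Two generators created from the same seed yield identical sequences,
# so weight initialization, and hence training, becomes repeatable.
a = np.random.default_rng(42).random(5)
b = np.random.default_rng(42).random(5)
print((a == b).all())  # True
```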