Is it possible to plot two ranges which are far apart from each other?
I mean, if I have a dataset like [1, 2, 3, 1001, 1002, 1003],
can I draw a plot like this?
|
1003 | x
1002 | x
1001 | x
1000 |
|
===================== omission
|
4 |
3 | x
2 | x
1 | x
-------------
You may want to check out this link:
Gnuplot surprising - Broken axes graph in gnuplot. The author presents three examples of plotting a graph with a broken x axis.
Three helpful examples:
http://gnuplot-surprising.blogspot.com/2011/10/broken-axes-graph-in-gnuplot-3.html
http://www.phyast.pitt.edu/~zov1/gnuplot/html/broken.html
http://www.phyast.pitt.edu/~zov1/
It is not straightforward.
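If gnuplot is not a hard requirement, the same effect can be sketched in Python with matplotlib: two stacked axes share the x axis, each showing one of the two ranges, and the gap between them stands in for the omission. This is only a minimal sketch of the idea:

```python
import matplotlib.pyplot as plt

data = [1, 2, 3, 1001, 1002, 1003]
x = range(len(data))

# Two stacked axes sharing the x axis: the top one shows the high range,
# the bottom one the low range, which fakes a broken y axis.
fig, (ax_top, ax_bot) = plt.subplots(2, 1, sharex=True)
ax_top.plot(x, data, "x")
ax_bot.plot(x, data, "x")

ax_top.set_ylim(1000, 1004)  # upper range
ax_bot.set_ylim(0, 4)        # lower range

# Hide the facing spines so the gap reads as an omission.
ax_top.spines["bottom"].set_visible(False)
ax_bot.spines["top"].set_visible(False)
ax_top.tick_params(bottom=False)

plt.show()
```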
Related
I have a CSV file containing some data, and I want to select the data most similar to a given input.
My data looks like this:
H1 | H2 | H3
--------+---------+----------
A | 1 | 7
B | 5 | 3
C | 7 | 2
The input point I want to find similar data to in my CSV is [6, 8].
In other words, I want to find the rows whose H2 and H3 values are similar to the input, and return their H1.
I want to use PySpark and some similarity measure like Euclidean distance, Manhattan distance, or cosine similarity, or a machine learning algorithm.
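A minimal PySpark sketch of the Euclidean-distance variant, assuming the CSV has already been loaded into a DataFrame with columns H1, H2 and H3 (the inline data below just mirrors the table above):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# In practice: spark.read.csv("data.csv", header=True, inferSchema=True)
df = spark.createDataFrame(
    [("A", 1, 7), ("B", 5, 3), ("C", 7, 2)], ["H1", "H2", "H3"]
)

query = [6, 8]  # the input point [H2, H3]

# Euclidean distance from each row to the query point.
dist = F.sqrt((F.col("H2") - query[0]) ** 2 + (F.col("H3") - query[1]) ** 2)

# Smallest distance first; take the H1 of the closest row.
closest = df.withColumn("dist", dist).orderBy("dist").select("H1").first()
print(closest["H1"])
```

Manhattan distance or cosine similarity would only change the dist expression.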
I'm trying to figure out how the Graphite summarize function works. I have the following data points, where the X-axis represents time and the Y-axis represents duration in ms.
+-------+------+
| X | Y |
+-------+------+
| 10:20 | 0 |
| 10:30 | 1585 |
| 10:40 | 356 |
| 10:50 | 0 |
+-------+------+
When I pick any time window in Grafana of two hours or more (why?) and apply summarize('1h', avg, false), I get a triangle starting at (9:00, 0) and ending at (11:00, 0), with the peak at (10:00, 324).
A formula that a colleague came up with to explain the above observation is as follows.
Let:
a = Number of data points for a peak, in this case 4.
b = Number of non-zero data points, in this case 2.
Then avg = sum / (a + b). This produces (1585 + 356) / 6 = 324, but it doesn't match the definition of any mean I know of. What is the math behind this?
Your data is at 10-minute intervals, so there are 6 points in each 1-hour period. Graphite simply takes the sum of the non-null values in each period divided by their count (a standard average). If you look at the raw series, you'll likely find that there are also zero values at 10:00 and 10:10.
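A quick check of that arithmetic, assuming the raw series really does contain zeros at 10:00 and 10:10, so the 1-hour bucket holds six 10-minute points:

```python
# The 10:00 bucket: 10:00, 10:10, 10:20, 10:30, 10:40, 10:50
bucket = [0, 0, 0, 1585, 356, 0]

# Graphite's avg: sum of the non-null values divided by their count.
values = [v for v in bucket if v is not None]
avg = sum(values) / len(values)
print(avg)  # 323.5, which shows up as ~324 on the graph
```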
I need to calculate the Z-index (Morton code) of a point on a plane from its two coordinates x, y.
Traditionally this is solved by bit interleaving.
However, I have boundaries, and I want the z-index to advance the Morton count only while the point is inside the active area, and to skip the count when it is outside.
To be clear, the typical z order in a 4x4 square is:
| 0 1 4 5 |
| 2 3 6 7 |
| 8 9 12 13 |
| 10 11 14 15 |
However if I have a 3x3 active area, I want the index to be calculated like this:
| 0 1 4 x |
| 2 3 5 x |
| 6 7 8 x |
| x x x x |
As you can see, the 00-11 quad is full, the 02-13 quad skips the count for the two points that fall outside the active area, and the same goes for 20-31 and 22-33.
Important: I want to do this without iterating.
Is there a known solution for this problem?
I was able to get the answer using the approach described at https://fgiesen.wordpress.com/2009/12/13/decoding-morton-codes/
To handle rectangular regions, round each dimension up to the nearest power of 2, interleave the shared low bits, and pack the extra bits of the major axis linearly above them.
For example, encoding the point (2,3) in a 5x4 rectangle works as follows.
Rounding 5x4 up to the nearest powers of 2 gives 8x4, i.e. 3 bits for x and 2 bits for y.
Encoding the point (2,3):
First interleave the low 2 bits of x = 0b010 with y = 0b11 to get 0b1110; the 3rd bit of the x dimension becomes the 5th bit of the result, giving 0b01110.
Encoding the point (4,3):
0b100, 0b11 becomes 0b11010.
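A small Python sketch of this rectangle encoding (the helper names are just for illustration); it reproduces the two worked examples above:

```python
def part1by1(n):
    # Spread the low 16 bits of n so a zero sits between consecutive bits.
    n &= 0x0000FFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton_rect(x, y, xbits, ybits):
    """Morton code for a 2**xbits by 2**ybits rectangle: interleave the
    shared low bits, then pack the leftover high bits of the longer axis
    linearly above them."""
    shared = min(xbits, ybits)
    mask = (1 << shared) - 1
    code = part1by1(x & mask) | (part1by1(y & mask) << 1)
    # Leftover high bits of the major axis go straight above the interleaved part.
    if xbits > ybits:
        code |= (x >> shared) << (2 * shared)
    else:
        code |= (y >> shared) << (2 * shared)
    return code

print(morton_rect(2, 3, 3, 2))  # 0b01110 = 14
print(morton_rect(4, 3, 3, 2))  # 0b11010 = 26
```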
To find the z-order of the 3x3 region, build the inverse mapping for the full 4x4 region by reversing the method above, and while generating the map skip any points that fall outside the 3x3 region (a sketch of this generation step follows after the gist link below).
The mapping would look like:
(0,0) -> (0,0)
(0,1) -> (1,0)
(0,2) -> (0,1)
(0,3) -> (1,1)
(1,0) -> (2,0)
(1,2) -> (2,1)
(2,0) -> (0,2)
(2,1) -> (1,2)
(3,0) -> (2,2)
Python code that might be useful: https://gist.github.com/kannaiah/4eb936b047a987b32555b2642a0979f7
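Here is a sketch of that map generation in Python: decode every index of the full power-of-2 square, skip the points outside the active area, and number the rest consecutively. The table is built once up front; after that each lookup is a plain dictionary access (the helper names are mine, not from the gist):

```python
def compact1by1(n):
    # Inverse of bit spreading: collect every other bit into the low 16 bits.
    n &= 0x55555555
    n = (n | (n >> 1)) & 0x33333333
    n = (n | (n >> 2)) & 0x0F0F0F0F
    n = (n | (n >> 4)) & 0x00FF00FF
    n = (n | (n >> 8)) & 0x0000FFFF
    return n

def morton_decode(code):
    # x sits in the even bits, y in the odd bits (matches the 4x4 layout above).
    return compact1by1(code), compact1by1(code >> 1)

def build_active_map(side, active_w, active_h):
    """Walk the full side x side z-order once, skipping points that fall
    outside the active_w x active_h area, and number the rest consecutively."""
    index_of = {}
    for code in range(side * side):
        x, y = morton_decode(code)
        if x < active_w and y < active_h:
            index_of[(x, y)] = len(index_of)
    return index_of

# For a 3x3 active area inside a 4x4 grid this reproduces the numbering
# shown in the 3x3 grid of the question:
table = build_active_map(4, 3, 3)
print(table[(2, 1)])  # 5
print(table[(2, 2)])  # 8
```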
I'm trying to simulate the following distribution:
a | 0 | 1 | 7 | 11 | 13
-----------------------------------------
p(a) | 0.34 | 0.02 | 0.24 | 0.29 | 0.11
I already simulated a similar problem: four types of balls with chances of 0.3, 0.1, 0.4 and 0.2. I created a vector F = [0 0.3 0.4 0.8 1] and used repmat to grow it to 1000 rows. Then I compared it with a column vector of 1000 random numbers grown to 5 columns using the same repmat approach. I compared those two, calculated the sum vector of the resulting matrix, and took the differences to get the frequencies (e.g. [301 117 386 196]).
But with the current distribution I don't know how to create the initial matrix F, or whether I can use the same approach at all.
I need the answer to be vectorised, so no for, while or if loops.
What if you create the following arrays:
largeNumber = 1000000;
a=repmat( [0], 1, largeNumber*0.34 );
b=repmat( [1], 1, largeNumber*0.02 );
% ...
e=repmat( [13], 1, largeNumber*0.11 );
Then you concatenate all of these arrays (to get a single array in which each value is represented in proportion to its probability), shuffle it, and extract the first N elements to get an N-element vector drawn from your distribution.
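The same concatenate-shuffle-slice idea, sketched here in Python/NumPy purely for illustration:

```python
import numpy as np

values = np.array([0, 1, 7, 11, 13])
probs = np.array([0.34, 0.02, 0.24, 0.29, 0.11])

large_number = 1_000_000
# Repeat each value in proportion to its probability...
pool = np.repeat(values, (large_number * probs).round().astype(int))
# ...shuffle the pool, and keep the first N entries as the sample.
rng = np.random.default_rng()
rng.shuffle(pool)
N = 1000
sample = pool[:N]

# The empirical frequencies should be close to p(a).
print([float(np.mean(sample == v)) for v in values])
```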
EDIT: of course this answer is the way to go.
I have a system of equations with 100001 variables (x1 through x100000 and alpha) and exactly that many equations. Is there a computationally efficient way, in Matlab or otherwise, to solve this system of equations? I know of the solve() command, but I'm wondering if there is something that will run faster. The equations are of the form:
1.) -x1 + alpha * (x4 + x872 + x9932) = 0
.
.
.
100000.) -x100000 + alpha * (x38772 + x95) = 0
In other words, the i-th equation has the variable xi with coefficient -1, plus alpha * (a sum of some other variables), equal to 0. The final equation is simply x1 + ... + x100000 = 1.
The Math Part
This system can always be brought to the canonical eigenvalue/eigenvector equation form:
A x = λ x
where A is your system's matrix and x = [x1; x2; ...; x100000]: each equation -xi + alpha * (sum of some xj) = 0 rearranges to (sum of some xj) = (1/alpha) * xi, so row i of A has a 1 in every column j that appears in that sum. Taking the example from this question, the system may be written down as:
/ \ / \ / \
| 0 1 0 0 0 | | x1 | | x1 |
| 0 0 1 0 1 | | x2 | | x2 |
| 1 0 0 0 0 | x | x3 | = (1/alpha) | x3 |
| 0 0 1 0 0 | | x4 | | x4 |
| 0 1 0 1 0 | | x5 | | x5 |
\ / \ / \ /
This means that your eigenvalues are λ = 1/α. Of course, you should beware of complex eigenvalues (unless you really want to take them into account).
The Matlab part
Well, this is up to your taste and skills. You can always find the eigenvalues of a matrix with eig(). It is better to use sparse matrices (to save memory):
N = 100000;
A = sparse(N,N);
% Here's your code to set A's values
A_lambda = eig(A);
ieps = 1e-6; % below this threshold the imaginary part is considered null
alpha = real(1 ./ A_lambda(arrayfun(@(x) abs(imag(x)) < ieps, A_lambda))); % Choose Real. Choose Life.
% Now do your stuff with alpha here
But mind this: numerically solving large eigenvalue problems might give you complex values where real ones are expected. Tweak ieps to a sensible value if you don't find anything at first.
To find the eigenvectors, just take one component out of the system and solve for the rest, e.g. by means of Cramer's rule. Then normalise them to one if you wish.
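For completeness, roughly the same recipe outside MATLAB, as a SciPy sketch on the 5x5 example matrix above (eigs computes only a few eigenpairs, which is usually what you want at N = 100000, and it returns the eigenvectors as well, so the Cramer's-rule step can be skipped):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 5  # the 5x5 example; use 100000 for the real problem

# Row i gets a 1 in every column j such that x_j appears in the
# parenthesised sum of equation i (connectivity of the example above).
rows = [0, 1, 1, 2, 3, 4, 4]
cols = [1, 2, 4, 0, 2, 1, 3]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N, N))

# A few eigenpairs of the sparse matrix; lambda = 1/alpha.
vals, vecs = spla.eigs(A, k=3)

ieps = 1e-6  # imaginary parts below this are treated as numerical noise
real_mask = np.abs(vals.imag) < ieps
alphas = 1.0 / vals[real_mask].real
print(alphas)

# Scale the matching eigenvectors so their entries sum to 1,
# which enforces the final constraint x1 + ... + xN = 1.
x = vecs[:, real_mask].real
x = x / x.sum(axis=0)
```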