This says that the Cauchy distribution is the distribution of the ratio of two independent normally distributed random variables with mean zero.
The note says the Skewed Cauchy Distribution is a generalization of the Cauchy distribution. It has a single shape parameter -1 < a < 1 that skews the distribution. The special case a = 0 yields the Cauchy distribution.
Does the Skewed Cauchy Distribution again arise as the ratio of two independent normally distributed random variables with mean zero? Or is it the ratio of two independent normally distributed random variables with non-zero means?
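One way to convince yourself of the first statement is a quick simulation: the ratio of two independent N(0,1) draws should behave like a standard Cauchy, whose median is 0 and whose quartiles sit at ±1. A minimal Python sketch (sample size and seed are arbitrary):

```python
import random

# Empirical check (not a proof): the ratio of two independent N(0,1)
# draws should follow a standard Cauchy with median 0 and quartiles at -1, +1.
random.seed(0)
n = 200_000
ratios = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

median = ratios[n // 2]
q1 = ratios[n // 4]
q3 = ratios[3 * n // 4]
print(round(median, 2), round(q1, 2), round(q3, 2))
```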
My question may seem a bit unclear, so let me explain what I mean:
I'm trying to recreate a distribution from a paper in AnyLogic. The paper gives the mu and sigma^2 for a log-normal distribution EXPLICITLY: mu = 12 and sigma^2 = 36.
When I plug these into the built-in AnyLogic function lognormal(double mu, double sigma, double min) as lognormal(12.0, 6.0, 0.0) (min = 0.0 because the value cannot be negative), the function returns values in the tens of thousands or even millions. I was expecting much lower values.
I know this for sure because the value dictates customer-required lead times, and the model only runs for about 6000 hours.
Then I noticed this text on the AnyLogic help page for the log-normal distribution: "sigma = The standard deviation of the included Normal."
What does that mean for the parameters given in the paper? Am I missing a conversion between the normal and log-normal parameters?
These distributions are generally very well documented if you do a Google search. From Wikipedia:
In probability theory, a log-normal (or lognormal) distribution is a
continuous probability distribution of a random variable whose
logarithm is normally distributed.
So the following statement
sigma = The standard deviation of the included Normal
means exactly what it says: there is a random variable that follows a normal distribution, and the sigma here is the sigma of that normally distributed variable.
To calculate the mean of a log-normal you do the following:
Mean = exp(mu + 0.5*sigma^2)
In your case, that is exp(12 + 0.5*36) = exp(30) ≈ 1.068647e+13.
So a pretty big number.
Many people think that normal and log-normal are the same for some reason, so either the paper is wrong or you are misreading it; I don't have access to the paper to check.
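To make the arithmetic concrete, here is the calculation as a small Python sketch (assuming, as in the question, that mu = 12 and sigma^2 = 36 are the parameters of the included normal):

```python
import math

# The paper's parameters, interpreted in AnyLogic's convention as the
# mu and sigma of the *included normal*: sigma^2 = 36, so sigma = 6.
mu = 12.0
sigma = 6.0

# Mean of the resulting log-normal: exp(mu + 0.5 * sigma^2)
mean = math.exp(mu + 0.5 * sigma ** 2)
print(mean)  # exp(30), roughly 1.07e13 -- hence the huge sampled values
```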
I have a correlation matrix for N random variables, each uniformly distributed on [0,1]. How can I simulate these random variables? Note N > 2. I tried using a Cholesky decomposition; these were my steps:
get the lower-triangular Cholesky factor L of the correlation matrix (L is N×N)
independently sample each of the N uniformly distributed random variables 10000 times (S is N×10000)
multiply the two: L*S. This gives me correlated samples, but their range is no longer within [0,1].
How can I solve the problem?
I know that if I only have two random variables I can do something like
rho*x1 + sqrt(1-rho^2)*y1
to get my correlated sample y. But with more than two correlated variables, I'm not sure what to do.
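For reference, the two-variable construction y = rho*x + sqrt(1 - rho^2)*w does produce correlation rho between standard normals; a quick Python check (rho, seed and sample size are arbitrary):

```python
import math
import random

# Two-variable version of the Cholesky trick for standard normals:
# y = rho*x + sqrt(1 - rho^2)*w has correlation rho with x.
random.seed(7)
rho = 0.7
n = 100_000

x = [random.gauss(0, 1) for _ in range(n)]
w = [random.gauss(0, 1) for _ in range(n)]
y = [rho * xi + math.sqrt(1 - rho ** 2) * wi for xi, wi in zip(x, w)]

# Empirical Pearson correlation between x and y
mx = sum(x) / n
my = sum(y) / n
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
corr = cov / (sx * sy)
print(round(corr, 3))
```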
You can get approximate solutions by generating correlated normals using the Cholesky factorization, then converting them to U(0,1)'s using the normal CDF. The solution is approximate because the normals have the desired correlation, but converting to uniforms is a non-linear transformation, and only linear transformations preserve correlation.
There's a transformation available which will give exact solutions if the transformed Var/Cov matrix is positive semidefinite, but that's not always the case. See the abstract at https://www.tandfonline.com/doi/abs/10.1080/03610919908813578.
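The Cholesky-plus-CDF approach can be sketched in Python with only the standard library (the 3×3 correlation matrix here is just an example):

```python
import math
import random

def cholesky(a):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# A made-up 3x3 correlation matrix for illustration
R = [[1.0, 0.6, 0.3],
     [0.6, 1.0, 0.5],
     [0.3, 0.5, 1.0]]
L = cholesky(R)

random.seed(42)
samples = []
for _ in range(10_000):
    z = [random.gauss(0, 1) for _ in range(3)]
    # correlated normals: x = L @ z
    x = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(3)]
    # map each coordinate through the normal CDF -> correlated U(0,1)'s
    samples.append([norm_cdf(v) for v in x])
```

Note that because the CDF is monotone, rank (Spearman) correlations survive the transformation exactly; it is only the Pearson correlations of the resulting uniforms that come out close to, rather than equal to, the targets.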
Consider a Normal distribution with mean 0 and standard deviation 1. I would like to divide this distribution into 9 regions of equal probability and take a random sample from each region.
It sounds like you want to find the values that divide the area under the probability distribution function into segments of equal probability. This can be done in matlab by applying the norminv function.
In your particular case:
segmentBounds = norminv(linspace(0,1,10),0,1)
Any two adjacent values of segmentBounds now describe the boundaries of segments of the Normal probability distribution function such that each segment contains one ninth of the total probability.
I'm not sure exactly what you mean by taking a random sample from each region. One approach is rejection sampling: for each region bounded by x0 and x1, draw a sample y = normrnd(0,1). If x0 < y < x1, keep it. Else discard it and repeat.
It's also possible that you intend to sample from these regions uniformly. To do this you can try rand(1)*(x1-x0) + x0. This will produce problems for the extreme quantiles, however, since the regions extend to +/- infinity.
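For readers without MATLAB, here is the same idea as a Python sketch; statistics.NormalDist stands in for norminv, and inverse-CDF sampling gives exact draws from each region without rejection:

```python
import random
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)

# Python analogue of norminv(linspace(0,1,10), 0, 1). MATLAB's version also
# returns the infinite endpoints; inv_cdf requires 0 < p < 1, so we keep
# only the 8 finite interior boundaries of the 9 equal-probability regions.
bounds = [nd.inv_cdf(i / 9) for i in range(1, 9)]

# One exact draw from each region by inverse-CDF sampling: pick u uniformly
# inside the region's probability slice [i/9, (i+1)/9], then map it back.
random.seed(1)
draws = [nd.inv_cdf(random.uniform(i / 9, (i + 1) / 9)) for i in range(9)]
```

Because the inverse CDF is monotone, each draw comes from the normal conditioned on its region, and the two infinite tail regions need no special handling.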
I have a probability distribution that defines the probability of occurrence of n possible states.
I would like to calculate the value of Shannon's entropy, in bits, of the given probability distribution.
Can I use wentropy(x,'shannon') to get the value and if so where can I define the number of possible states a system has?
Since you already have the probability distribution, call it p, you can apply the following formula for Shannon entropy instead of using wentropy:
H = sum(-(p(p>0).*(log2(p(p>0)))));
This gives the entropy H in bits.
p must sum to 1.
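For readers outside MATLAB, the same formula as a small Python sketch:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p (must sum to 1)."""
    assert abs(sum(p) - 1.0) < 1e-9, "p must sum to 1"
    # Skip zero-probability states: they contribute nothing to the entropy.
    return sum(-q * math.log2(q) for q in p if q > 0)

# n equally likely states give log2(n) bits; a certain outcome gives 0 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```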
I thought randn returns a random number drawn from a normal distribution with mean 0 and standard deviation 1, so I expected to get a number in the range (0, 1). But what I get is a number outside that range.
What am I doing wrong?
You are thinking of a uniform distribution. A normal distribution can, in theory, have very big numbers, with very low probability.
randn has a mean of 0 and standard deviation of 1. The normal distribution is the bell-curve / Gaussian shape, with the highest probability at the mean and probability falling off relative to the standard deviation.
What you are looking for is rand, which "samples" from a uniform random distribution, which gives numbers bounded between 0 and 1 with even probability at all points.
You're confusing the normal distribution with the uniform distribution.
Another possible source of confusion:
A normal distribution with mean 0 and variance 1 is often denoted N(0,1). This is sometimes called the standard normal distribution and implies that samples are drawn from all real numbers, i.e., the range (−∞,+∞), with a mean 0 and variance 1. The standard deviation is also 1 in this case, but this notation specifies the variance (many screw this up). The transformation N(μ,σ²) = μ + σ N(0,1), where μ is the mean, σ² is the variance, and σ is the standard deviation, is very useful.
Similarly, a continuous uniform distribution over the open interval (0,1) is often denoted U(0,1). This is often called a standard uniform distribution and implies that samples are drawn uniformly from just the range (0,1). Similarly, the transformation U(a,b) = a + (b − a) U(0,1), where a and b represent the edges of a scaled interval, is useful.
Note that the 0's and 1's in these two cases do not represent the same things at all other than being parameters that describe each distribution. The ranges that these two distributions are sampled from are called the support.
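A short Python sketch of the distinction and of the two location-scale transformations (random.gauss and random.random play the roles of randn and rand; the μ, σ, a, b values are just examples):

```python
import random

random.seed(0)

# N(0,1) draws are unbounded; U(0,1) draws always land in [0, 1).
z = [random.gauss(0, 1) for _ in range(1_000)]
u = [random.random() for _ in range(1_000)]

# Location-scale transformation of the standard normal:
mu, sigma = 10.0, 2.0
x = [mu + sigma * zi for zi in z]   # samples from N(10, 4), since sigma^2 = 4

# Location-scale transformation of the standard uniform:
a, b = 5.0, 8.0
v = [a + (b - a) * ui for ui in u]  # samples from U(5, 8)
```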