Can a NN predict the pressure from the velocity instead of solving the Poisson equation?

In an incompressible CFD code based on Chorin's projection method, the velocity update is split into two parts:
$\Delta u=\Delta u^*+\Delta u'$, where $\Delta u^*=\nu \nabla^2 u+f$. In the classic method, the pressure $p$ is solved from a Poisson equation, and $\Delta u'$ is then calculated from the pressure gradient.
In my code, a simple NN model is employed to predict the pressure from $u^*$, and it works approximately. However, the divergence of the velocity, $\nabla \cdot u$, which should be zero over the whole flow domain, does not stay zero.
I think this must be a problem with my simple NN model, and I am wondering: can a NN predict the velocity directly under the constraint $\nabla \cdot u = 0$?

If your neural network is not trained to output divergence-free velocity fields, then it will not do so. You could add the divergence-free condition as a penalty term in the network's loss function to try to enforce this.
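As a concrete illustration of such a penalty, the divergence of a predicted 2D field can be approximated with finite differences. This is a minimal NumPy sketch (the grid, spacing, and field names are my own assumptions; in a real training loop the same operation would be written with the NN framework's differentiable ops so it can be backpropagated):

```python
import numpy as np

def divergence_penalty(u, v, dx, dy):
    """Mean squared divergence of a 2D velocity field (u, v) on a uniform grid.

    Uses central differences via np.gradient; in an actual NN loss this
    would be implemented with the framework's differentiable operations.
    """
    du_dx = np.gradient(u, dx, axis=1)  # d(u)/dx along columns
    dv_dy = np.gradient(v, dy, axis=0)  # d(v)/dy along rows
    div = du_dx + dv_dy
    return np.mean(div ** 2)

# Example: the field (u, v) = (x, -y) is divergence-free (du/dx + dv/dy = 1 - 1 = 0)
x = np.linspace(0.0, 1.0, 32)
y = np.linspace(0.0, 1.0, 32)
X, Y = np.meshgrid(x, y)
penalty = divergence_penalty(X, -Y, x[1] - x[0], y[1] - y[0])  # ~0
```

The penalty would be added to the existing supervised loss with a weight, so the network is pushed toward divergence-free outputs without being forced to match pressure labels exactly.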

Related

Why doesn't this LQG controller work in Simulink?

I am trying to simulate an LQG controller for the CART wind turbine. I have read an article that does this and checked the calculations and theory step by step.
This is the model:
in which A, B, C are the state-space matrices, Kf is the feedback gain, Kk the Kalman gain, and w and v are the process and measurement noise, both Gaussian.
The corresponding block diagram is:
My Simulink simulation:
The initial conditions are assumed to be zero and are set in the integrator block. The states are x1 (rotor speed), x2 (drive-train torsion spring force), x3 (generator speed).
The operating point at which the linear turbine model was obtained is given below, so I set the input to 42 rpm:
The output should look like this:
but what I get is this:
I can't figure out why. Is it a problem with Simulink, given that I have done the same thing as the article, or am I doing something wrong?
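Independent of Simulink, one quick sanity check is the separation principle: assembling the plant and observer into one closed-loop matrix, its eigenvalues must be exactly the controller poles plus the observer poles, all in the left half-plane. A NumPy sketch with a toy double integrator (the matrices and gains here are illustrative, not the CART model):

```python
import numpy as np

# Toy plant (double integrator), standing in for the turbine model
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Kf = np.array([[2.0, 3.0]])    # state feedback gain -> controller poles -1, -2
Kk = np.array([[5.0], [6.0]])  # observer (Kalman) gain -> observer poles -2, -3

# Augmented closed loop for [x; xhat] with u = -Kf*xhat and the observer
# xhat' = A*xhat + B*u + Kk*(y - C*xhat)
top = np.hstack([A, -B @ Kf])
bottom = np.hstack([Kk @ C, A - B @ Kf - Kk @ C])
Acl = np.vstack([top, bottom])

eigs = np.linalg.eigvals(Acl)
# Separation principle: eigs = eig(A - B*Kf) U eig(A - Kk*C) = {-1, -2, -2, -3}
```

If the assembled closed loop is stable on paper but the Simulink response still diverges, the usual suspects are sign conventions on the feedback path and where the noise inputs enter the diagram.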

Calculate effective diffusion coefficient

I need to compute the drift velocity
$v=\frac{d}{dt}\langle r(t)\rangle$ and the effective diffusion coefficient
$D_\mathrm{eff}=\frac{1}{2}\frac{d}{dt}\left(\langle r(t)^2\rangle-\langle r(t)\rangle^2\right)$ from random trajectories, for the case of Brownian motion over a periodic potential.
As a mere example assume I have an ensemble of random trajectories:
dt=1e-2; N=1e6; Ensemble=200; Do=1;
% trajectories: cumulative sum of Gaussian increments with variance 2*Do*dt
wn=cumsum(sqrt(2*Do*dt)*normrnd(0,1,[Ensemble,N]),2);
time=(1:N)*dt;   % one time stamp per column of wn
I first compute the drift velocity:
P2 = polyfit(time,mean(wn(:,:)-wn(:,1)),1);
vx_Sim=P2(1);
which gives me the value expected from the analytic solution.
Then I compute the effective diffusion like:
XM=mean((wn(:,:)-wn(:,1)).^2,1)/(2*Do);
P =polyfit(time,sqrt(XM),1);
DDeffSim=P(1);
yet I don't get back the expected result from the analytic solution for the particular Brownian motion I'm studying. Am I calculating this wrong?
The effective diffusion coefficient is related to the variance of my ensemble of trajectories, so I used the Matlab function var() to compute the diffusion:
DDeffsim=mean(var(wn).')'./(2*dt)/NT;
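As a cross-check of the variance-based estimate, for free Brownian motion (no periodic potential) the recovered D_eff must equal Do. A NumPy sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, ensemble, Do = 1e-2, 2000, 2000, 1.0

# Trajectories: cumulative sum of Gaussian increments with variance 2*Do*dt
steps = np.sqrt(2.0 * Do * dt) * rng.standard_normal((ensemble, n_steps))
x = np.cumsum(steps, axis=1)
t = dt * np.arange(1, n_steps + 1)

# Drift: slope of the ensemble-mean position (~0 for free diffusion)
drift = np.polyfit(t, x.mean(axis=0), 1)[0]

# D_eff: half the slope of the variance, since Var[x(t)] = 2*Do*t
D_eff = np.polyfit(t, x.var(axis=0), 1)[0] / 2.0
```

Note that the fit is on the variance itself: fitting sqrt(MSD), as in the question's snippet, returns the slope of a square-root curve and cannot give back a diffusion coefficient.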

Function approximation by ANN

So I have something like this,
y=l3*[sin(theta1)*cos(theta2)*cos(theta3)+cos(theta1)*sin(theta2)*cos(theta3)-sin(theta1)*sin(theta2)*sin(theta3)+cos(theta1)*cos(theta2)*sin(theta3)]+l2*[sin(theta1)*cos(theta2)+cos(theta1)*sin(theta2)]+l1*sin(theta1)+l0;
(which is just y = l3*sin(theta1+theta2+theta3) + l2*sin(theta1+theta2) + l1*sin(theta1) + l0, the forward kinematics of a planar 3-link arm)
and something similar for x, where the theta_i are angles from specified intervals and the l_i are coefficients. The task is to approximate the inverse of these equations: given x and y, the result should be the corresponding thetas. So I randomly generate thetas from the specified intervals and compute x and y. Then I normalize x and y to <-1,1> and the thetas to <0,1>. I use this data as the training set: the inputs of the network are the normalized x and y, the outputs are the normalized thetas.
I trained the network and tried different configurations, but the absolute error of the network was still around 24.9% after a whole night of training. That is so much that I don't know what to do.
Bigger training set?
Bigger network?
Experiment with learning rate?
Longer training?
Technical info
Error backpropagation was used as the training algorithm. The neurons have sigmoid activation functions and the units are biased. I tried the topologies [2 50 3] and [2 100 50 3]; the training set has length 1000 and training ran for 1000 cycles (in one cycle I go through the whole dataset). The learning rate is 0.2.
The approximation error was computed as
sum of abs(desired_output - reached_output)/dataset_length.
The optimizer used is stochastic gradient descent.
The loss function is
1/2 (desired - reached)^2
The network was implemented in my own Matlab template for NNs. I know that is a weak point, but I'm confident the template is right because it has successfully solved the XOR problem, approximated differential equations, and approximated a state regulator. I mention the template because this information may be useful.
Neuron class
Network class
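For cross-checking a template like this, an equivalent minimal network (biased sigmoid units, MSE loss 1/2(desired-reached)^2, gradient descent) fits in a few lines of Python; the topology, learning rate, and toy data here are illustrative, and batch gradient descent is used for brevity instead of per-sample SGD:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 inputs -> 3 targets in (0, 1), like the normalized (x, y) -> thetas setup
X = rng.uniform(-1.0, 1.0, (200, 2))
T = sigmoid(X @ rng.normal(size=(2, 3)))  # arbitrary smooth targets

# One hidden layer [2, 20, 3], biased sigmoid units
W1, b1 = rng.normal(scale=0.5, size=(2, 20)), np.zeros(20)
W2, b2 = rng.normal(scale=0.5, size=(20, 3)), np.zeros(3)
lr = 0.2

def forward(X):
    H = sigmoid(X @ W1 + b1)
    return H, sigmoid(H @ W2 + b2)

_, Y0 = forward(X)
loss0 = 0.5 * np.mean((T - Y0) ** 2)

for epoch in range(500):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)        # dLoss/dz at the output (sigmoid + MSE)
    dH = (dY @ W2.T) * H * (1 - H)    # backpropagated to the hidden layer
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

_, Y1 = forward(X)
loss1 = 0.5 * np.mean((T - Y1) ** 2)  # should be well below loss0
```

If a known-good reimplementation like this also plateaus on the real (x, y) -> thetas data, the problem is in the data or the problem formulation, not the template.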
EDIT:
I used 2500 unique data points within the theta ranges
theta1 in <0, 180>, theta2 in <-130, 130>, theta3 in <-150, 150>.
I also experimented with a larger dataset, but the accuracy doesn't improve.
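Apart from network size, there is a structural issue worth checking: the map from (x, y) back to (theta1, theta2, theta3) is not single-valued. A planar 3-link arm reaches the same point with many different joint configurations, so a network trained on randomly generated pairs is averaging over incompatible targets, which by itself puts a floor under the achievable error. A NumPy sketch demonstrates this (assuming the x equation mirrors the y equation with cosines, and picking l2 = l3 to make the construction easy):

```python
import numpy as np

l0, l1, l2, l3 = 0.0, 1.0, 0.8, 0.8  # illustrative link lengths; l2 == l3

def forward(t1, t2, t3):
    """Planar 3-link forward kinematics (x assumed to mirror y with cosines)."""
    x = l3 * np.cos(t1 + t2 + t3) + l2 * np.cos(t1 + t2) + l1 * np.cos(t1)
    y = l3 * np.sin(t1 + t2 + t3) + l2 * np.sin(t1 + t2) + l1 * np.sin(t1) + l0
    return x, y

# Two different joint configurations with the same endpoint: with l2 == l3,
# swapping the roles of (theta1+theta2) and (theta1+theta2+theta3) leaves the
# sum of the last two link vectors unchanged ("elbow-up" vs "elbow-down").
a = forward(0.3, 0.2, 1.0)   # theta1+theta2 = 0.5, theta1+theta2+theta3 = 1.5
b = forward(0.3, 1.2, -1.0)  # theta1+theta2 = 1.5, theta1+theta2+theta3 = 0.5
# a and b coincide, yet theta2 and theta3 differ by 1 and 2 rad
```

Common workarounds are restricting the sampled angles to a branch where the inverse is unique, or predicting something other than the raw angles.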

Fitness function for an inverted pendulum

What fitness function should be used to solve an inverted pendulum?
I am evolving neural networks with a genetic algorithm, and I don't know how to evaluate each individual.
I tried minimizing the angle of the pendulum and maximizing the distance traveled by the end of the evaluation time (10 s), but this doesn't work.
The inputs to the neural network are the cart velocity, cart position, pendulum angular velocity, and pendulum angle at time t. The output is the force applied at time t+1.
Thanks in advance.
I found this paper, which lists its objective function, defined as:
[formula given as an image in the paper]
where "Xmax = 1.0, thetaMax = pi/6, X'max = 1.0, theta'Max = 3.0, N is the number of iteration steps, T = 0.02 * TS and Wk are selected positive weights." (These are the paper's specific values for the angles, velocities, and positions; you will want to use your own values depending on the boundary conditions of your pendulum.)
The paper also states: "The first and second terms determine the accumulated sum of normalised absolute deviations of X1 and X3 from zero and the third term when minimised, maximises the survival time."
That should be more than enough to get started with, but I highly recommend you read the whole paper. It's a great read and I found it quite educational.
You can make your own fitness function, but I think the idea of using the position, velocity, angle, and rate of change of the pendulum's angle is a good one. You can, however, use those variables in very different ways than the author of the paper chose to model their function.
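Based on the quoted description alone, a fitness function of that general shape (accumulated normalized |position| and |angle| deviations plus a survival term) might be sketched as below; the weights and the exact form of the survival term are my assumptions, so consult the paper for the real definition:

```python
import numpy as np

X_MAX, THETA_MAX = 1.0, np.pi / 6  # normalization bounds from the quote
W1, W2, W3 = 1.0, 1.0, 1.0         # illustrative positive weights

def fitness(x, theta, n_survived, n_total):
    """Lower is better: accumulated normalized deviations of cart position
    and pendulum angle, plus a penalty for falling over before n_total steps.
    x and theta are the per-step trajectories actually simulated."""
    pos_term = W1 * np.sum(np.abs(x) / X_MAX) / n_total
    ang_term = W2 * np.sum(np.abs(theta) / THETA_MAX) / n_total
    survival_term = W3 * (n_total - n_survived) / n_total  # 0 if it never fell
    return pos_term + ang_term + survival_term

# A run that stays near upright scores better (lower) than one that drifts and falls
good = fitness(np.full(500, 0.01), np.full(500, 0.01), 500, 500)
bad = fitness(np.full(200, 0.8), np.full(200, 0.4), 200, 500)
```

With a genetic algorithm you would either minimize this directly or negate it if your library maximizes fitness.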
It wouldn't hurt to read up on harmonic oscillators either. They take the general form
m*x'' + B*x' + k*x = A*cos(w*t)
(where B or A may be 0 depending on whether the oscillator is damped/undamped or driven/undriven, respectively).

Matlab: Determinant of variance-covariance matrix

When solving the log-likelihood expression for autoregressive models, I came across the variance-covariance matrix Gamma given on slide 9 of the "Parameter estimation of time series" tutorial. Now, in order to use
fminsearch
to maximize the likelihood function, I need to express the likelihood function where the variance-covariance matrix appears. Can somebody please show, with an example, how I can implement (determinant of Gamma)^(-1/2)? Any example other than the autoregressive model will also do.
How about 1/sqrt(det(Gamma)) for the inverse square root of the determinant and inv(Gamma) for the inverse?
But if you do not want to implement it yourself, you can look at yulewalkerarestimator.
UPD: for estimation of the autocovariance matrix, use xcov.
Also, this topic is explained in a bit more detail here.
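One caveat with det(Gamma) directly: it under- or overflows already for moderately sized matrices, so log-likelihoods are usually written with a log-determinant instead (in MATLAB, 2*sum(log(diag(chol(Gamma)))) gives log det(Gamma) for positive-definite Gamma). A sketch of the equivalent in Python using numpy.linalg.slogdet:

```python
import numpy as np

def gaussian_loglik(x, mu, Gamma):
    """Log-density of N(mu, Gamma), using slogdet instead of det so that
    the log|Gamma|^(-1/2) term never under/overflows: it is -0.5 * logdet."""
    k = len(x)
    sign, logdet = np.linalg.slogdet(Gamma)
    assert sign > 0, "covariance must be positive definite"
    r = x - mu
    return -0.5 * (k * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(Gamma, r))

Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
ll = gaussian_loglik(np.array([0.1, -0.2]), np.zeros(2), Gamma)
```

Maximizing this (or minimizing its negation with an fminsearch-style optimizer) is equivalent to working with (det Gamma)^(-1/2) directly, but numerically stable.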