VRP : Arc cost based on current vehicle load (CO2 Emissions) - or-tools

I would like to factor CO2 emissions into the Vehicle Routing Problem: the arc cost is no longer the distance but the CO2 emission over each arc.
As a first-order approximation, the CO2 emission of each arc can be modeled as a linear function of the current load and of constant topographic characteristics of the arc (slope, distance, altitude, ...):
arc_cost(arc, load) = K(arc) + alpha(arc) * load
K and alpha are functions of the arc only (stateless); the load depends on the previous deliveries.
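To make the model concrete, here is a minimal sketch (my own toy example, not taken from the notebook linked below; the route, demands, K and alpha values are invented) of how the total emission of a single route can be evaluated by tracking the remaining load after each delivery:

    # Evaluate the CO2 emission of one route when each arc costs
    # arc_cost(arc, load) = K(arc) + alpha(arc) * load.
    # All numbers are illustrative placeholders.

    def route_emission(route, demands, K, alpha, depot=0):
        """route: visit order including the depot at both ends, e.g. [0, 3, 1, 2, 0].
        demands: demand per node; K, alpha: per-arc constants indexed by (i, j)."""
        load = sum(demands[n] for n in route if n != depot)  # everything is loaded at the depot
        total = 0.0
        for i, j in zip(route[:-1], route[1:]):
            total += K[(i, j)] + alpha[(i, j)] * load  # emission of this arc at the current load
            load -= demands[j]                         # deliver at node j
        return total

    # Toy instance: depot 0 and three customers.
    demands = {0: 0, 1: 2, 2: 3, 3: 1}
    K = {(i, j): 10.0 for i in range(4) for j in range(4) if i != j}
    alpha = {(i, j): 0.5 for i in range(4) for j in range(4) if i != j}
    print(route_emission([0, 3, 1, 2, 0], demands, K, alpha))

This is exactly the quantity I would like the solver's arc-cost evaluator to see, but the cost callback only receives the two node indices, not the vehicle's current load.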
I haven't found how to model this with the OR-Tools Python API. Here is my current attempt: https://github.com/remisoulignac/scm_optim_problems/blob/main/SCM290-GreenVehicleRoutingProblem.ipynb
For now, I fall back on a two-level optimization approach:
A classical VRP optimization model, parameterized by several control points at which I impose a certain weight of goods to be delivered on the chosen route.
A hyperparameter optimization level that plays with the different control points and optimizes the overall real fuel consumption of the tour.
Thank you for your help,

Related

How to add or delete fluid in LBM (lattice Boltzmann method)

LBM focuses on fluid clusters, and uses the macro fluid density and velocity to calculate the equilibrium distribution function, and then uses the evolution equation to achieve system iteration. But if we add the same fluid to the lattice grid points in the LBM or reduce the existing fluid continuously, how should we recalculate the macro fluid density and velocity? Or how should the distribution function at the lattice grid point be recalculated? Can LBM simulate a scenario where fluid is continuously added or reduced to the system? For example, water keeps flowing from the tap.
The traditional lattice-Boltzmann method (e.g. the D2Q9 lattice in 2D) can only be applied to incompressible flows. Put in simple terms this means that there can't be more mass entering the domain than exiting it: The mass inside the domain is roughly the same throughout the simulation. This simplification of the generally compressible Navier-Stokes equations can not only be applied to incompressible fluids (such as water) but also to low-Mach number flows like the flow around a car (for more details see here). Yet the traditional lattice-Boltzmann method can't describe multi-phase and free-surface flows as well as flows with sinks and sources (which all result in a change of density of the system).
Any inlet or outlet condition in the incompressible lattice-Boltzmann method falls into one of the following categories:
Periodic boundaries (the populations that exit the domain on one side enter it again on the other side)
Pressure-drop-periodic boundaries (such as Zhang/Kwok) for periodic flow but with an additional term for compensating for a pressure drop inside the domain due to friction
Velocity and pressure boundaries (generally a velocity inlet and a pressure outlet): There exist various formulations of these to make sure that the moments of the distribution are actually conserved, and they have different characteristics regarding numerical stability. Most of them enforce some sort of symmetry and extrapolation of higher moments. The simplest ones are those by Zou/He, but others like Guo's extrapolation method are significantly more stable for under-resolved and turbulent (high Reynolds number) flows. This review discusses different ones in more detail.
You can have a look at this small code I have written in C++ for 2D and 3D simulations if you are interested in more details on how this actually works.
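If it helps to see the core ingredient mentioned in the question in code, here is a minimal Python sketch (not taken from the linked C++ code; just the standard D2Q9 weights and lattice velocities) of computing the equilibrium distribution of one cell from the macroscopic density and velocity, and of recovering those moments back from the populations:

    import numpy as np

    # Standard D2Q9 lattice: 9 discrete velocities and their weights.
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

    def equilibrium(rho, u):
        """Second-order equilibrium populations f_i^eq for a single cell.
        rho: macroscopic density, u: macroscopic velocity (2-vector), in lattice units."""
        cu = c @ u                     # c_i . u for every direction i
        usq = u @ u
        return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    # The macroscopic fields are the moments of the populations:
    # rho = sum_i f_i and rho*u = sum_i f_i*c_i, which is why adding or removing
    # mass at a node is not something the standard scheme does by itself.
    f_eq = equilibrium(1.0, np.array([0.05, 0.0]))
    print(f_eq.sum())    # ~1.0: density recovered
    print(c.T @ f_eq)    # ~[0.05, 0]: momentum recovered

Any "tap" that keeps adding water therefore has to be expressed through one of the boundary conditions listed above (or a source term in a suitable model variant), not by simply increasing the f_i at a node.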
That being said, there exist several variations of lattice-Boltzmann methods in research that allow for multi-component or multi-phase flows (e.g. by introducing additional distributions) or compressible flows (with lattices with more discrete velocities and potentially a second lattice), but they are still exotic and you won't find many implementations around.

Pressure drop in Modelica.Fluid.Pipes.DynamicPipe when dynamic momentum balance is taken into account

I have a problem in understanding the simulation results of a discretized Modelica.Fluid.Pipes.DynamicPipe when using a compressible gas as medium and taking the dynamic momentum balance into account.
To illustrate that I built up a very simple model: pressure source + pipe + pressure sink. The pressure in the pressure source is linearly increased over time. The parameterization of the pipe mainly corresponds to the default values, but the parameter "momentumDynamics" is set to "Modelica.Fluid.Types.Dynamics.FixedInitial".
For lower gas velocities (= smaller inlet pressures) the pressure drop is distributed nearly linearly over the discrete elements of the pipe (of course the pressure drop is not exactly the same in every element due to the change in medium properties). As the gas velocity gets higher, however, the pressure drop in the last flow model (= resistive element) dominates by far. The picture below shows the pressures in the different flow models along the pipe. The pressure in the last flow model (green dashed line) corresponds to the constant pressure in the pressure sink.
Actually when looking at the pressure distribution along the pipe it looks as if the pipe was choking. This is however not possible since the velocities are still far below the velocity of sound. The velocity in the last flow model is a lot higher than in the rest of the pipe, because the pressure is a lot lower, since it corresponds to atmospheric pressure. This picture below shows the velocity in the flow models in the pipes as well as the velocities of sound. The velocities of sound are nearly constant at ~330 m/s.
What I do not understand:
Does the simulation result represent the physics correctly? If not, where is the "error" in the equations? If yes, what is the physical behavior that the model represents here?
What I've tried:
Changing the discretization of the pipe does not change the phenomenon.
It seems to be independent of the medium model; I have also tried it with quite different medium models for compressible gas. (The example shown uses Modelica.Media.Air.ReferenceAir.Air_ph.)
It only occurs if the dynamic momentum balance is chosen (despite the name of this flag, it not only "activates" the dynamic term in the momentum balance but also adds the pressure loss due to acceleration).
I'm looking forward to any hints to explain this issue!

Self-Organizing Maps (SOM) neighborhood and weight updates

I am studying Self-Organizing Maps (SOM) in the field of neural networks, and I have two questions:
1) Why is the neighborhood size decreasing?
2) Why not update just the winner? What would happen in that case?
Thanks in advance
The power of the SOM is to create a neural network that is displayable and human readable, so:
1) The neighborhood size is decreased over the iterations in order to give the algorithm some stability.
2) The point of also updating the neighborhood is to create the map (which will be displayable) where nearby units have similar weights. If you update only the winning unit, the map will not form, since similar units will be left scattered across the map.
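To make both answers concrete, here is a minimal generic SOM training step in Python (my own sketch, not tied to any particular SOM library): the winner and its map neighbors are pulled toward the input, and the neighborhood radius and learning rate shrink with the iteration count.

    import numpy as np

    def som_step(weights, grid, x, t, n_iter, lr0=0.5, sigma0=3.0):
        """One SOM update. weights: (n_units, dim) codebook; grid: (n_units, 2) map
        coordinates of the units; x: input sample; t: current iteration."""
        lr = lr0 * (1 - t / n_iter)               # learning rate decays over time
        sigma = sigma0 * (1 - t / n_iter) + 1e-3  # neighborhood radius shrinks -> stability (question 1)
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))       # best-matching unit
        d2 = np.sum((grid - grid[winner])**2, axis=1)  # distances measured ON THE MAP, not in input space
        h = np.exp(-d2 / (2 * sigma**2))               # Gaussian neighborhood around the winner
        weights += lr * h[:, None] * (x - weights)     # neighbors move too -> nearby units get similar weights (question 2)
        return weights

    # 5x5 map of 2-D weight vectors trained on random inputs.
    grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
    weights = np.random.rand(25, 2)
    for t in range(100):
        weights = som_step(weights, grid, np.random.rand(2), t, 100)

If you replace h with a vector that is 1 at the winner and 0 everywhere else, only the winning unit ever moves, and the ordered map described above never forms.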

What is the structure of an indirect (error-state) Kalman filter and how are the error equations derived?

I have been trying to implement a navigation system for a robot that uses an Inertial Measurement Unit (IMU) and camera observations of known landmarks in order to localise itself in its environment. I have chosen the indirect-feedback Kalman Filter (a.k.a. Error-State Kalman Filter, ESKF) to do this. I have also had some success with an Extended KF.
I have read many texts and the two I am using to implement the ESKF are "Quaternion kinematics for the error-state KF" and "A Kalman Filter-based Algorithm for IMU-Camera Calibration" (pay-walled paper, google-able).
I am using the first text because it better describes the structure of the ESKF, and the second because it includes details about the vision measurement model. In my question I will be using the terminology from the first text: 'nominal state', 'error state' and 'true state', which refer to the IMU integrator, the Kalman filter, and the composition of the two (nominal minus errors), respectively.
The diagram below shows the structure of my ESKF implemented in Matlab/Simulink; in case you are not familiar with Simulink I will briefly explain the diagram. The green section is the Nominal State integrator, the blue section is the ESKF, and the red section is the sum of the nominal and error states. The 'RT' blocks are 'Rate Transitions' which can be ignored.
My first question: Is this structure correct?
My second question: How are the error-state equations for the measurement models derived?
In my case I have tried using the measurement model of the second text, but it did not work.
Kind Regards,
Your block diagram combines two indirect methods for bringing IMU data into a KF:
You have an external IMU integrator (in green, labelled "INS", sometimes called the mechanization, and described by you as the "nominal state", but I've also seen it called the "reference state"). This method freely integrates the IMU externally to the KF and is usually chosen so you can do this integration at a different (much higher) rate than the KF predict/update step (the indirect form). Historically I think this was popular because the KF is generally the computationally expensive part.
You have also fed your IMU into the KF block as u, which I am assuming is the "command" input to the KF. This is an alternative to the external integrator. In a direct KF you would treat your IMU data as measurements, and in order to do that the IMU would have to model (position, velocity, and) acceleration and (orientation and) angular velocity: otherwise there is no possible H such that Hx can produce estimated IMU output terms. If you instead feed your IMU measurements in as a command, your predict step can simply act as an integrator, so you only have to model as far as velocity and orientation.
You should pick only one of those options. I think the second one is easier to understand, but it is closer to a direct Kalman filter, and requires you to predict/update for every IMU sample, rather than at the (I assume) slower camera framerate.
Regarding measurement equations for version (1), in any KF you can only predict things you can know from your state. The KF state in this case is a vector of error terms, and thus you can only predict things like "position error". As a result you need to pre-condition your measurements in z to be position errors. So make your measurement the difference between your "estimated true state" and your position from "noisy camera observations". This exact idea may be represented by the xHat input to the indirect KF. I don't know anything about the MATLAB/Simulink stuff going on there.
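As a rough numerical illustration of that pre-conditioning (a hedged, scalar sketch with made-up numbers, not your Simulink model): the measurement handed to the error-state filter is the difference between the nominal (INS-integrated) position and the camera-derived position, and the update estimates the position error that the red summing block then removes.

    p_nominal = 10.30   # position from the freely integrating INS / nominal state
    p_camera  = 10.05   # position reconstructed from the camera landmarks (noisy)

    # Error state: dp (position error), prior covariance P, measurement noise R.
    dp, P, R = 0.0, 0.5, 0.04
    H = 1.0             # after pre-conditioning, the camera "measures" the position error directly

    z = p_nominal - p_camera        # pre-conditioned measurement: an observed position ERROR
    K = P * H / (H * P * H + R)     # Kalman gain
    dp = dp + K * (z - H * dp)      # updated estimate of the position error
    P = (1 - K * H) * P

    p_corrected = p_nominal - dp    # the summing block: nominal state minus estimated error
    print(dp, p_corrected)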
Regarding real-world considerations for the summing block (in red) I refer you to another answer about indirect Kalman filters.
Q1) Your Simulink model looks to be appropriate. Let me shed some light on quaternion-mechanization-based KFs, which I've worked on for navigation applications.
Since the Kalman filter is an elegant mathematical technique that borrows from the science of stochastics and measurement, it can help you reduce the noise in the system without the need to model the noise elaborately.
All KF systems start with some preliminary understanding of the model that you want to make free of noise. The measurements are fed back to better evolve the states (the measurement equation Y = CX). In your case, the states that you are talking about are errors in the quaternions, which would be the 4 values dq1, dq2, dq3, dq4.
A KF working well in your application would accurately determine the attitude/orientation of the device by controlling the error around the quaternion. Quaternions describe the spatial orientation of a body using a scalar and a vector, more specifically an angle and an axis.
The error equations that you are talking about are covariances, which contribute to the Kalman gain. The covariances denote the spread around the mean and are useful in understanding how the central/average behavior of the system changes with time. Low covariances denote little deviation from the mean behavior for any system. As the KF cycles run, the covariances keep getting smaller.
The Kalman Gain is finally used to compensate for the error between the estimates of the measurements and the actual measurements that are coming in from the camera.
Again, this elegant technique first ensures that the error in the quaternion values converge around zero.
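A tiny numerical sketch of those two statements (a generic scalar KF with invented noise values, nothing specific to quaternions): the gain K is built from the covariances, and P shrinks as the cycles run.

    import random

    x_est, P = 0.0, 1.0   # initial estimate of the error state and its covariance
    Q, R = 1e-4, 0.25     # model noise and measurement noise covariances
    true_value = 0.4      # the constant quantity being estimated

    for k in range(10):
        P = P + Q                                   # predict: covariance grows slightly
        z = true_value + random.gauss(0, R**0.5)    # noisy incoming measurement
        K = P / (P + R)                             # Kalman gain from the covariances
        x_est = x_est + K * (z - x_est)             # correct the estimate toward the measurement
        P = (1 - K) * P                             # error covariance keeps getting smaller
        print(k, round(x_est, 3), round(P, 4))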
Q2) EKF is a great technique to use as long as you have a non-linear measurement construction technique. Be very careful in using EKF if there are too many transformations in your system, i.e., don't try to reconstruct measurements using transformations of your states; this seriously affects the model's sanctity, and since the noise covariances would not undergo similar transformations, there is a chance of hitting a singularity as soon as the matrices become non-invertible.
You could look at constant gain KF schemes, which would save you from covariance propagation and save substantial computation effort and time. These techniques are quite new and look very promising. They actively absorb P(error covariance), Q(model noise covariance) and R(measurement noise covariance) and work well with EKF schemes.

particle swarm optimization inertia factor

I am reading about soft computing algorithms, currently "Particle Swarm Optimization". I understand the technique in general, but I am stuck on the mathematical/physical part that I can't picture: how it works and how it affects the flight of the particles. That part is the first term of the velocity-update equation, which is called the "inertia factor".
The complete velocity-update equation (in its standard form, with inertia weight w) is:
v(t+1) = w*v(t) + c1*r1*(pbest - x(t)) + c2*r2*(gbest - x(t))
I read in one article, in section 2.3 "Inertia Factor", that:
"This variation of the algorithm aims to balance two possible PSO tendencies (de-
pendent on parameterization) of either exploiting areas around known solutions
or explore new areas of the search space. To do so this variation focuses on the
momentum component of the particles' velocity equation 2. Notice that if you
remove this component the movement of the particle has no memory of the pre-
vious direction of movement and it will always explore close to a found solution.
On the other hand if the velocity component is used, or even multiplied by a w
(inertial weight, balances the importance of the momentum component) factor
the particle will tend to explore new areas of the search space since it cannot
easily change its velocity towards the best solutions. It must rst \counteract"
the momentum previously gained, in doing so it enables the exploration of new
areas with the time \spend counteracting" the previous momentum. This vari-
ation is achieved by multiplying the previous velocity component with a weight
value, w."
The full PDF is at: http://web.ist.utl.pt/~gdgp/VA/data/pso.pdf
But I can't picture how this happens, physically or numerically, or how this factor takes the search from exploration to exploitation, so I need a numerical example to see how it works.
Also, in Genetic Algorithms there is the schema theorem, which is a proof of the GA's ability to find the optimum solution; is there such a theorem for PSO?
It's not easy to explain PSO using mathematics (see the Wikipedia article for example).
But you can think like this: the equation has 3 parts:
particle speed = inertia + local memory + global memory
So you control the 'importance' of these components by varying the coefficients of each part.
There's no analytical way to see this, unless you make the stochastic part constant and ignore things like particle-particle interaction.
Exploit: take advantage of the best known solutions (local and global).
Explore: search in new directions, but don't ignore the best known solutions.
In a nutshell, you can control how much importance to give to the particle's current speed (inertia), the particle's memory of its best known solution, and the particle's memory of the swarm's best known solution.
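Since a numerical example was asked for, here is a small sketch (one particle in 1-D with invented coefficients; pbest and gbest are held fixed at 0 so only the effect of w is visible) showing how a large inertia weight keeps the particle overshooting and exploring around the best known solution, while a small one lets it settle quickly:

    import random

    def run(w, steps=15, c1=1.5, c2=1.5):
        """One particle in 1-D; pbest/gbest fixed at 0, so only the inertia term varies."""
        x, v = 10.0, 5.0        # start away from the optimum, with some initial velocity
        pbest = gbest = 0.0     # best known solutions (local and global memory)
        for _ in range(steps):
            r1, r2 = random.random(), random.random()
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # inertia + local memory + global memory
            x = x + v
            print(f"w={w}  x={x:8.3f}  v={v:8.3f}")

    run(0.9)  # high inertia: the particle keeps its momentum, overshoots 0 and keeps exploring around it
    run(0.2)  # low inertia: the memory terms dominate, x converges toward 0 in a few steps

With w close to 1 the particle first has to "counteract" the momentum it has already gained before it can turn back toward pbest/gbest, which is exactly the exploration behavior described in the quoted paragraph; with w close to 0 each step is driven almost entirely by the two memory terms.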
I hope it can help you!
Best regards
Inertia was not part of the original PSO algorithm introduced by Kennedy and Eberhart in 1995. It took another three years until Shi and Eberhart published this extension and showed (to some extent) that it works better.
One can set that value to a constant (supposedly 0.8 to 1.2 works best).
However, the point of the parameter is to balance exploitation and exploration of the search space, and the authors got the best results when they defined the parameter as a linear function that decreases over time from 1.4 to 0.
Their rationale was that one should first explore the space to find a good seed and later exploit the area around that seed.
My feeling about it is that the closer you get to 0, the more chaotic the particles' turns become.
For a detailed answer refer to Shi, Eberhart 1998 - "A modified Particle Swarm Optimizer".
Inertia controls the influence of the previous velocity.
When it is high, the cognitive and social components are less relevant (the particle keeps going its own way, exploring new portions of the space).
When it is low, the particle more thoroughly explores the space where the best-so-far optimum has been found.
Inertia can change over time: start high, then decrease.