Battery management using MATLAB Simulink and Stateflow

I am designing a battery model and its control to undergo cyclic charge and discharge.
The battery model is created using a Simscape Electrical Battery (Table-Based) block.
The control is modeled using Stateflow.
The Stateflow chart takes SOC values as inputs and provides current values as outputs. By default, the battery is at rest and no current is drawn in that state (I = 0 A). Then, based on the battery's SOC %, it goes to the charge (3 A) or discharge (-3 A) state. I have defined the controls as follows:
- If the battery has SOC >= 50%, it has to discharge.
- If the battery has SOC < 50%, it has to charge.
- While discharging, if the battery reaches 0% SOC, it goes to rest.
- While charging, if the battery reaches 100% SOC, it goes to rest.
I have defined the initial SOC as 50%.
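To make the intended behavior concrete, here is a minimal Python sketch of the same transition logic (purely illustrative; the actual chart lives in Stateflow, and the state names here are made up):

# Illustrative mirror of the Stateflow chart; soc is a fraction in [0, 1].
# Sign convention as in the question: +3 A charging, -3 A discharging, 0 A at rest.
def step(state, soc):
    if state == "rest":
        state = "discharge" if soc >= 0.5 else "charge"
    elif state == "discharge" and soc <= 0.0:
        state = "rest"
    elif state == "charge" and soc >= 1.0:
        state = "rest"
    current = {"rest": 0.0, "charge": 3.0, "discharge": -3.0}[state]
    return state, current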
When I run the simulation, the battery starts to discharge (I = -3 A), as per the condition provided in the Stateflow chart.
But the battery has not come to rest after reaching 0% SOC.
I am getting the following warning:
*At time 1944.017100, one or more assertions are triggered. State of charge must be greater than or equal to zero. The assertion comes from:*
*Block path: Example_cell_model/Battery (Table-Based)1*
*Assert location: o (location information is protected)*
I don't understand why the battery has not come back to rest.
Does anyone have any idea of the cause of this problem and how to resolve it?
[Battery model][1]
[Stateflow chart][2]
[Simulation result][3]
[Warning message][4]
[1]: https://i.stack.imgur.com/TFstb.png
[2]: https://i.stack.imgur.com/PbtuD.png
[3]: https://i.stack.imgur.com/kUAP4.png
[4]: https://i.stack.imgur.com/njLQy.png
Thanks in advance!

I'll be honest here: I'm not really someone who's deep into this topic by any margin, but I think you might want to rethink the conditions for charge/discharge.
SOC >= 0.5 || SOC == 1
seems redundant to me for one (SOC == 1 is already covered by SOC >= 0.5), but the culprit might be the condition for the transition from the discharge to the rest state:
SOC == 0.001
It's never a good idea to test floating-point numbers for exact equality. Try going for SOC <= 0.01 instead.
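A quick, generic Python illustration of the pitfall (nothing Simulink-specific about it):

# Accumulated floating-point values rarely hit a literal exactly, so an
# equality test like SOC == 0.001 can be skipped over entirely.
print(0.1 + 0.2 == 0.3)               # False: the sum is 0.30000000000000004
print(abs((0.1 + 0.2) - 0.3) < 1e-9)  # True: compare against a tolerance instead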
The same applies to the charge-to-rest transition.
Hope I'm not talking complete nonsense here.

Related

Observation Space for race strategy development - Reinforcement learning

I refrained from asking for help until now, but as my thesis' deadline creeps ever closer and I do not know anybody with experience in RL, I'm trying my luck here.
TL;DR
I have not found an academic/online resource which helps me understand the correct representation of the environment as an observation space. I would be very thankful for any links or for giving me a starting point of how to model the specifics of my environment in an observation space.
Short thematic introduction
The goal of my research is to determine the viability of RL for strategy development in motorsports. This is currently achieved by simulating (lots of!) races and calculating the resulting race time (and thus end position) of different strategic decisions (which are the timing of pit stops plus the number of laps to refuel for). This demands manual input of the expected inlaps (the lap on which a pit stop occurs) for all participants, which implicitly limits the possible strategies by human imagination as well as the number of possible simulations.
Use of RL
A trained RL agent could decide on its own when to perform a pit stop and how much fuel should be added, in order to minimize the race time and react to probabilistic events in the simulation.
The action space is Discrete(4) and represents the options to continue, or to pit and refuel for 2, 4, or 6 laps respectively.
Problem
The observation space is of POMDP nature and needs to model the agent's current race position (which I hope is enough?). How would I implement the observation space accordingly?
The training is performed using OpenAI's Gym framework, but a general explanation/link to article/publication would also be appreciated very much!
Your observation could be just an integer which represents the round or position the agent is in. This is obviously not a sufficient representation, so you need to add more information.
A better observation could be the agent's race position x1, the round the agent is in x2, and the current fuel in the tank x3. All three of these can be represented by real numbers. Then you can create your observation by concatenating these into a vector obs = [x1, x2, x3].
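In Gym terms, that vector maps naturally onto a Box space. A minimal sketch, where the bounds n_cars, n_laps and max_fuel are placeholders you would set for your race format:

import numpy as np
import gym

n_cars, n_laps, max_fuel = 20, 60, 100.0   # hypothetical race parameters

observation_space = gym.spaces.Box(
    low=np.array([1.0, 1.0, 0.0], dtype=np.float32),             # position, lap, fuel
    high=np.array([n_cars, n_laps, max_fuel], dtype=np.float32),
    dtype=np.float32,
)
action_space = gym.spaces.Discrete(4)  # continue, or pit and refuel for 2/4/6 laps

def make_obs(position, lap, fuel):
    return np.array([position, lap, fuel], dtype=np.float32)

Your environment's step() would then return make_obs(...) built from the simulation state.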

Alert in RAM/CPU Usage Detection in e-Commerce Server

Currently I'm building the monitoring services for my e-commerce server, mostly focused on CPU/RAM usage. It's essentially anomaly detection on time-series data.
My approach is to build an LSTM neural network that predicts the next CPU/RAM value on the trend chart and to compare the deviation against the standard deviation multiplied by some factor (currently 10).
But in real-life conditions, it depends on many different factors, such as:
1- Maintenance time (during which an "anomaly" is not an anomaly)
2- Sales periods on days off, holidays, etc., when increased RAM/CPU usage is of course normal
3- If the percentage decrease of CPU/RAM is sustained across 3 observations (5 min, 10 min and 15 min) -> anomaly. But if it decreased 50% within 5 min and then barely changed at 10 min (-5% ~ +5%) -> not an anomaly.
Currently I detect anomalies with a formula like this:
isAlert = (Diff5m >= 10 && Diff10m >= 15 && Diff30m >= 40)
where Diff is the percentage difference in absolute value.
Unfortunately I didn't save my "pure" data for building the neural network; for example, when the system detected an anomaly, I modified the data so that it is not an anomaly anymore.
I would like to add some attributes to my model's input, such as isMaintenance, isPromotion, isHoliday, etc., but sometimes that leads to overfitting.
I also want my NN to adjust its baseline over time, for example as my service becomes more popular.
Are there any hints on these aims?
Thanks
I would say that an anomaly is an unusual outcome, i.e. an outcome that's not expected given the inputs. As you've figured out, there are a few variables that are expected to influence CPU and RAM usage. So why not feed those to the network? That's the whole point of machine learning: your network will make a prediction of CPU usage, taking into account the sales volume, whether there is (or was) a maintenance window, etc.
Note that you probably don't need an isPromotion input if you include actual sales volumes. The former is a discrete input, and only captures a fraction of the information present in the totalSales input.
Machine learning definitely needs data. If you threw that away, you'll have to restart capturing it. As for adjusting the baseline, you can achieve that by overweighting recent input data.
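As a rough sketch of what "feed those to the network" could look like with a Keras-style LSTM (the feature names, window size and layer sizes are assumptions, not your actual schema):

from tensorflow import keras

# Hypothetical feature layout per 5-minute sample:
# [cpu_pct, ram_pct, total_sales, is_maintenance, is_holiday]
window, n_features = 12, 5   # one hour of 5-minute samples

model = keras.Sequential([
    keras.layers.Input(shape=(window, n_features)),
    keras.layers.LSTM(32),
    keras.layers.Dense(2),   # predicted next cpu_pct and ram_pct
])
model.compile(optimizer="adam", loss="mse")

An alert would then be raised when the observed value deviates from the prediction by more than some multiple of the residuals' standard deviation, much like your current rule but now conditioned on context.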

Matlab Simulink: while loop with subtraction

I am hoping somebody here will be able to help me out with a small issue in one of my Simulink/Matlab models. It is quite similar to a problem I discussed earlier, but a little more complicated, and now it is more of a Simulink issue than a Matlab one.
So I have a turbine whose speed is controlled by the gate's opening, hence by the control voltage. By controlling the gate opening I am accelerating the turbine, and at some point in time I need to introduce a saturation effect (since I am testing the code now, this will be done via an external signal). This effect won't change the control voltage, but it affects other components of the system, hence at the same control voltage the turbine's speed will go up. But at the same time I need to keep the speed at the same value as it was before the saturation effect (let's say it was 320 rpm). To do so I need to decrease the control voltage and keep doing so until I reach the speed as it was before. There is no need to do it instantly (this approach will later be implemented in hardware), but it would be nice to check the algorithm in these synthetic tests.
In terms of the model, I was planning to use a while loop with the speed requirement "if speed > 320", again just to simplify things. To decrease the control voltage I was planning to subtract 0.25 (u2) from the original 50 (% opening) at first, and after that keep increasing this value by 0.25 until I bring the speed below 320. I can't know the exact opening at which this requirement will be satisfied, hence I need some kind of algorithm to "track" this voltage.
So it should be something like this:
u2 = 0;
while speed > 320
    u2 = u2 + 0.25;
end
u2 is initially zero since we have a predefined initial control voltage. And obviously, once the motor's speed drops below 320, I need to keep the latest value of u2 (and the control voltage).
Overall, it is a small piece of logic and should be done in Simulink (I don't want to introduce any other Fcn function into the model). I've never used while and if blocks in Simulink, but so far I came up with this system. It's a simplified version of my model, but the control principle is the same.
We are getting a motor speed of 350, compared with 320 (the speed before "saturation"), and since our speed after saturation is higher, we need to reduce the control voltage. To trigger the while loop block I've decided to use a simple switch. The while block meanwhile is:
Definitely not the best implementation, but I was trying a lot of different combinations without any real success. I am always getting the same error.
I was trying to use a step signal instead of the constant 7 to model the acceleration of the motor, and was getting the same error at the moment of acceleration above the 320 threshold. So it looks like the approach is almost right, but mathematically it fails to find the most suitable solution. I've tried to implement a transport delay in the memory part of the while subsystem but kept getting errors during compilation.
Are there any obvious (and not so obvious) mistakes? Or maybe I should have chosen another approach from the beginning... I really hope that somebody will be able to help. Thank you in advance and have a great day.
I do not think that you have used the While block correctly.
This is what I have done: I used a MATLAB Function block instead of the While block, as follows.
The function in the MATLAB Function block is
function u2 = fcn(speed, u2d)
% speed: measured motor speed
% u2d:   previous value of u2 (fed back, e.g. through a Unit Delay/Memory block)
if speed > 320
    u2 = u2d + 0.25;   % keep increasing the correction while the speed is too high
else
    u2 = u2d;          % hold the last value once the speed drops below 320
end
And these are the results I got (see the Scope 1 and Scope screenshots).
Edit
As you prefer a function-free model, the following may do the same.

Understanding Latency in Eye Tracking

I am having trouble understanding latency in the context of eye-tracking. I currently work with a 30Hz eye-tracker integrated into a head-mounted display for vision research.
The way I look at it, there is an overall delay from the time the eye actually makes a movement to the time these coordinates are provided by the eye-tracking software. There are two components to this delay:
1) The delay due to the imaging frequency of the eye tracker (30 fps -> 33.3 ms)
2) The latency because of the actual algorithm that extracts data and provides coordinates.
Am I right in thinking that the total delay is the sum of 1) and 2) ?
I spoke with the company that makes the eye-tracker, and they said the latency in eye tracking is 60ms. Does that mean that my overall latency is 60ms + (1000/30Hz) ~ 93.3ms ?
Or does the 60ms figure somehow take into account the FPS of the eye camera?
Different vendors use different definitions of eye-tracking latency, so you have to ask your specific vendor what definition they use when they state 60 ms latency.
As an example Tobii, one of the largest eye tracking vendors, uses the following definitions:
Processing latency: Describes the time required by the eye tracker processor to perform image processing and eye gaze computations.
Total system latency: The duration from mid-point of the eye image exposure, to when a sample is available via the API on the client computer. This includes half of the image exposure time, plus image read-out and transfer time, processing time and time to transfer the data sample to a client computer.
I would assume that the number the company gave you (60 ms) is the total latency of everything, on the assumption that they are the ones providing both the software and the camera.
If the camera's latency is not included, then your calculation for the total latency should be correct.
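For what it's worth, the arithmetic under the processing-only reading is easy to sanity-check:

# Back-of-the-envelope check (all values in milliseconds).
vendor_latency = 60
frame_period = 1000 / 30          # ~33.3 ms between eye images at 30 Hz

# If the 60 ms excludes the camera, the worst-case sampling delay adds on top:
print(round(vendor_latency + frame_period, 1))   # 93.3, as estimated in the question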

Simulation: send packets according to exponential distribution

I am trying to build a network simulation (ALOHA-like) where n nodes decide at any instant whether they have to send or not, according to an exponential distribution (exponentially distributed arrival times).
What I have done so far: I set up a master clock in a for loop which ticks, and a node will start sending at a given instant (tick) only if a sample I draw from a uniform [0,1] for that instant is greater than 0.99999; i.e., at any time instant a node has a 0.00001 probability of sending (very close to zero, as the exponential distribution requires).
Can these arrival times be considered exponentially distributed at each node, and if yes, with what parameter?
What you're doing is called a time-step simulation, and can be terribly inefficient. Each tick in your master clock for loop represents a delta-t increment in time, and in each tick you have a laundry list of "did this happen?" possible updates. The larger the time ticks are, the lower the resolution of your model will be. Small time ticks will give better resolution, but really bog down the execution.
To answer your direct questions, you're actually generating a geometric distribution. That will provide a discrete time approximation to the exponential distribution. The expected value of a geometric (in terms of number of ticks) is 1/p, while the expected value of an exponential with rate lambda is 1/lambda, so effectively p corresponds to the exponential's rate per whatever unit of time a tick corresponds to. For instance, with your stated value p = 0.00001, if a tick is a millisecond then you're approximating an exponential with a rate of 1 occurrence per 100 seconds, or a mean of 100 seconds between occurrences.
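A quick numpy check of that correspondence (the tick-to-time mapping is an assumption):

import numpy as np

# Per-tick Bernoulli trials produce geometric gaps with mean ~ 1/p ticks
# (here 100,000 ticks, i.e. ~100 s if one tick corresponds to 1 ms).
rng = np.random.default_rng(0)
p, n_ticks = 1e-5, 10_000_000
sends = rng.random(n_ticks) < p
gaps = np.diff(np.flatnonzero(sends))
print(gaps.mean())   # roughly 100,000, up to sampling noise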
You'd probably do much better to adopt a discrete-event modeling viewpoint. If the time between network sends follows the exponential distribution, once a send event occurs you can schedule when the next one will occur. You maintain a priority queue of pending events, and after handling the logic of the current event you poll the priority queue to see what happens next. Pull the event notice off the queue, update the simulation clock to the time of that event, and dispatch control to a method/function corresponding to the state-update logic of that event. Since nothing happens between events, you can skip over large swaths of time. That makes the discrete-event paradigm much more efficient than the time-step approach unless the model state needs updating in pretty much every time step. If you want more information about how to implement such models, check out this tutorial paper.
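Here is a minimal sketch of that event loop in Python (the node count, rate and horizon are arbitrary placeholders):

import heapq
import random

lam = 0.01            # send rate per node, in events per unit time
n_nodes, horizon = 5, 10_000.0

# Priority queue of (event_time, node): each node pre-schedules its next send.
events = [(random.expovariate(lam), node) for node in range(n_nodes)]
heapq.heapify(events)

clock = 0.0
while events and clock < horizon:
    clock, node = heapq.heappop(events)   # jump straight to the next event
    # ... handle the send here (collision checks, statistics, ...) ...
    heapq.heappush(events, (clock + random.expovariate(lam), node))

Because the clock jumps from event to event, the cost scales with the number of sends rather than with simulated time.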