How can I save output from Simulink? - matlab

I'm a student learning to use MATLAB. For an assignment, I have to create a simple state machine and collect some results. I'm used to using Verilog/Modelsim, and I'd like to collect data only when the state machine's output changes, which is not necessarily every time/sample period.
Right now I have a model that looks like this:
RequestChart ----> ResponseChart ----> Unit Delay Block --> (Back to RequestChart)
      |                  |
      +------------------+------> Mux --> "To Workspace" Sink Block
I've tried setting the sink block to save as "Array" format, but it only saves 51 values. I've tried setting it to "Timeseries", but it saves tons of zero values.
Can someone give me some suggestions? Like I said, MATLAB is new to me, so please let me know if I need to clarify my question or provide more information.
Edit: Here's a screen capture of my model:

Generally Simulink will output a sample at every integration step. If you want to output data only when a particular event occurs, in this case only when some data changes, then do the following:
run the output of the state machine into a Detect Change block (from the Logic and Bit Operations library)
run that signal into the trigger port of a Triggered Subsystem.
run the output of the state machine into the data port of the Triggered Subsystem.
inside the triggered subsystem, run the data signal into a To Workspace block.
Data will only be saved at the time points when the trigger occurs, i.e. when your data changes.

In your Simulink window, make sure the Relative Tolerance is small so that the solver generates many more points between your start and end times. Click the Simulation option at the top of the window, then click Model Configuration Parameters.
From there, change the Relative Tolerance to something small, like 1e-10. After that, try running your simulation again. You should have a lot more points in your output array that you can now save.

Related

Anylogic: Dynamically change source rate using variable/slider

I am trying to dynamically change the source Arrival rate using a variable "arrivalRate" linked to a slider (see image).
However, during the simulation the initial rate remains the same, even when I change the arrivalRate. I know that the arrivalRate variable is changing successfully (it is an int) - but this has no effect on the source rate during the simulation.
Anyone have an idea what the issue is - or how to fix it?
Whenever you see the = sign before a field, it means the field is not dynamic: it is only evaluated at the start of the model or at element creation, and it will not change throughout the simulation run unless you force it. In other words, the variable arrivalRate is read only once to assign the source's arrival rate, and that's it.
Now, if you want to change it dynamically, write the following in the slider's Action field:
source.set_rate( arrivalRate );

Editing layer via Python script startEditing, do we have to "close" editing?

I'm working on a vector layer where I have to merge all n+i [id] attributes into entity(n)[id] where entities(n+i)[id] equals the entity(n)[id], then delete all n+i entities.
Everything works fine, but I call startEditing several times before committing changes, and my question is: does calling commitChanges close startEditing, or does it leave it open, as if it were a file descriptor or a pointer that we need to free after the job is done?
The code is:
olayer.startEditing()
olayer.changeAttributeValue(n, id_obj, id_obj_sum, NULL, True)
olayer.commitChanges()
olayer.startEditing()
i = i - 1
while i >= 1:
    olayer.deleteFeature(n + i)
    i = i - 1
olayer.commitChanges()
As you can see, we call olayer.startEditing several times, even more so because all of that code is inside a while body...
So will that spawn hordes of startEditing "pointers", or will it just repeatedly set the olayer status to "open for editing"?
The code actually works, but it's painfully slow; is this the reason why?
By and large, you should not start the edit mode more than once on your layer.
You can commit once at the end of all your changes to the layer; in the meantime your modifications stay in an edit buffer.
QgsVectorLayer.commitChanges() can leave the edit mode open if you pass False as a parameter (the parameter is called stopEditing, see the docs). Like this:
# If the commit is successful, leave the edit mode open
layer.commitChanges(False)
Also, have a look at the with edit(layer): syntax in the QGIS docs. With that syntax you avoid starting/committing/closing the edit mode yourself; QGIS does it for you:
with edit(layer):
    # All my layer modifications here
# When this line is reached, my layer modifications are already committed.
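Applied to the loop from the question, a single edit session could look roughly like this (just a sketch reusing the question's variables n, i, id_obj, id_obj_sum and NULL):

with edit(olayer):
    # One edit session for the attribute change and all the deletions
    olayer.changeAttributeValue(n, id_obj, id_obj_sum, NULL, True)
    i = i - 1
    while i >= 1:
        olayer.deleteFeature(n + i)
        i = i - 1
# Everything is committed in one go when the with block ends

Committing once at the end, instead of committing inside the loop, is also what typically makes this kind of script noticeably faster.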

how to edit simulink plutosdr qpsk example

I am working with this example from MathWorks: https://www.mathworks.com/help/supportpkg/plutoradio/examples/qpsk-transmitter-with-adalm-pluto-radio-1.html
When I run the example, it creates an sdrqpsktx variable in the MATLAB workspace.
I want to change sdrqpsktx.MessageBits to something smaller.
When I run the following code in MATLAB:
a = sdrqpsktx.MessageBits(1:448);
sdrqpsktx.MessageBits = a;
I successfully change sdrqpsktx.MessageBits to a.
However, when I run the example in Simulink, sdrqpsktx.MessageBits changes back to its original size.
How do I permanently change sdrqpsktx.MessageBits and run the example with my changes?
Thank you.
There is a model callback, probably a StartFcn, that is overwriting your changes to the variable every time you start the simulation. You either need to delete or modify that code.
To see the code go to:
File -> Model Properties -> Model Properties, and select the Callbacks tab.
Any callback that is followed by a * has code in it. Click on that callback to see the code.
See Callbacks for Customized Models for more detailed information.

Setting up openai gym

I've been given a task to set up an OpenAI Gym toy environment which can only be solved by an agent with memory. I've been given an example with two doors: at time t = 0 I'm shown either 1 or -1, and at t = 1 I can move to the correct door and open it.
Does anyone know how I would go about starting out? I want to show that A2C or PPO can solve this using an LSTM policy. How do I go about setting up the environment, etc.?
To create a new environment in gym format, it should have the five functions mentioned in the gym.core file.
https://github.com/openai/gym/blob/e689f93a425d97489e590bba0a7d4518de0dcc03/gym/core.py#L11-L35
To lay this down in steps:
Define the observation space and the action space for your environment, preferably using the gym.spaces module.
Write the step function, which performs the agent's action and returns a 4-tuple containing the next observation from the environment, the reward, done (a boolean indicating whether the episode is over), and an info dictionary for any extra information you want.
Write a reset function for the environment that reinitialises the episode to a random start state and returns the initial observation.
These functions are enough to be able to run an RL agent on your environment.
You can skip the render, seed and close functions if you want.
For the task you have defined, you can model both the observation and the action space using Discrete(2): 0 for the first door and 1 for the second door.
Reset would return, as its observation, which door has the reward.
The agent would then choose one of the doors, 0 or 1.
Then perform an environment step by calling step(action), which will return the agent's reward and the done flag set to true, signifying that the episode is over.
Frankly, the problem you describe seems almost too simple for any reinforcement learning algorithm, but I assume you have provided it only as an example.
Remembering over longer horizons is usually harder.
You can read the gym documentation and its toy environments to understand how to create one.
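Putting those steps together, a minimal sketch of such an environment could look like the following (the class name and the one-step episode structure simply mirror the description above, and the snippet assumes the classic gym API in which step returns a 4-tuple). To really force the agent to rely on memory, you would add one or more intermediate steps in which the hint observation is hidden before the door has to be chosen.

import random
import gym
from gym import spaces

class TwoDoorEnv(gym.Env):
    # Toy two-door task: reset reveals which door hides the reward,
    # and a single step opens one door and ends the episode.

    def __init__(self):
        self.action_space = spaces.Discrete(2)       # 0: open the first door, 1: open the second door
        self.observation_space = spaces.Discrete(2)  # which door currently hides the reward
        self.correct_door = 0

    def reset(self):
        # Pick the rewarded door at random and reveal it as the initial observation
        self.correct_door = random.randint(0, 1)
        return self.correct_door

    def step(self, action):
        # One-step episode: the chosen door decides the reward, then the episode is over
        reward = 1.0 if action == self.correct_door else 0.0
        return self.correct_door, reward, True, {}

A quick sanity check of the environment could then be:

env = TwoDoorEnv()
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())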

Anylogic: How to conditionally close/open a valve

I am very new at AnyLogic. I have a simple model using the Fluid dynamics library: two tanks and a valve between them. The valve has to open at a rate, say X, only when the amount in the first tank, say tank_1, is twice the amount in the second tank, say tank_2.
Could you please help me with that?
Regards
You probably have more conditions on what to do with the valve depending on different things, but it's a bad idea to make something occur only at the instant tank_1 is exactly 2 times larger than tank_2... Instead, create a boolean variable that tells you whether tank_1 is currently above or below 2*tank_2. Let's call it "belowTank2". I will assume tank_1 is below 2*tank_2 at the beginning of the simulation, so belowTank2 starts as true.
Then create an event that runs cyclically, every second or even more often if you want, and use the following code:
if (belowTank2) {
    if (tank_1.amount() > 2 * tank_2.amount()) {
        valve.set_openRate(0.1);
        belowTank2 = false;
    }
} else {
    if (tank_1.amount() < 2 * tank_2.amount()) {
        valve.set_openRate(0.3);
        belowTank2 = true;
    }
}
This means that whenever tank_1 surpasses 2*tank_2, it will trigger the rate change on the valve, and it will trigger another change when it drops below 2*tank_2 again.