Is anyone using CrossSim, the crossbar simulator from Sandia National Laboratories? The manual does not explain the input files reset.csv/set.csv used for lookup table generation. I need to know about the rmp values in that file. What is rmp and how was it calculated?
Or is there any other crossbar array simulation software that can be used for vector multiplication and similar operations, mainly for memristor devices?
I am a graduate student.
I'm studying resistive memory devices for neuromorphic computing.
I am also using the CrossSim simulator (ver. 0.2). Maybe I can help you.
Generally, a memristor is a two-terminal device whose resistance is modulated by applied voltage pulses. If the memristor experiences a voltage higher than its threshold voltage (Vth), its state changes; otherwise, it holds its state.
So we program it with a voltage higher than Vth and read its state by applying a voltage lower than Vth.
The manual gives no specific explanation of what is in the reset.csv/set.csv files. They contain current values acquired experimentally, not calculated values. After the lookup table is generated, those values become conductance values. That is why a read voltage is required in the create_lookup_table.py example: conductance = current / voltage.
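As a rough sketch of that conversion (the file layout, column handling, and the 0.1 V read voltage below are illustrative assumptions on my part, not CrossSim's documented format):

import pandas as pd

# Hypothetical example: assume each row of set.csv holds one experimentally
# measured current (in amperes) after a programming pulse.
V_READ = 0.1  # assumed read voltage in volts, below Vth so the read does not disturb the state
currents = pd.read_csv("set.csv", header=None).iloc[:, 0]
conductances = currents / V_READ  # G = I / V, in siemens
print(conductances.head())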
The lookup table is there so that experimental device data can be used for verification when the memristors move to hardware. If you just want to simulate algorithmically, you don't need a lookup table; you can do this by adding the following code:
params.numeric_params.update_model = "ANALYTIC"
I hope this is helpful to you. :)
I refrained from asking for help until now, but as my thesis' deadline creeps ever closer and I do not know anybody with experience in RL, I'm trying my luck here.
TLDR;
I have not found an academic/online resource which helps me understand the correct representation of the environment as an observation space. I would be very thankful for any links or for giving me a starting point of how to model the specifics of my environment in an observation space.
Short thematic introduction
The goal of my research is to determine the viability of RL for strategy development in motorsports. This is currently achieved by simulating (lots of!) races and calculating the resulting race time (and thus end position) of different strategic decisions (the timing of pit stops plus the number of laps to refuel for). This demands manual input of the expected inlaps (the lap on which a pit stop occurs) for all participants, which implicitly limits the possible strategies by human imagination as well as the number of possible simulations.
Use of RL
A trained RL agent could decide on its own when to perform a pit stop and how much fuel should be added, in order to minimize the race time and react to probabilistic events in the simulation.
The action space is Discrete(4) and represents the options to continue, or to pit and refuel for 2, 4, or 6 laps, respectively.
Problem
The environment is of a POMDP nature, and the observation space needs to model the agent's current race position (which I hope is enough?). How would I implement the observation space accordingly?
The training is performed using OpenAI's Gym framework, but a general explanation/link to article/publication would also be appreciated very much!
Your observation could be just an integer that represents the round or position the agent is in. This is obviously not a sufficient representation, so you need to add more information.
A better observation could be the agent's race position x1, the round the agent is in x2, and the current fuel in the tank x3. All three of these can be represented by a real number. Then you can create your observation by concatenating these into a vector obs = [x1, x2, x3].
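As a minimal sketch of how such an observation could be declared with OpenAI Gym's spaces (the bounds for field size, race length, and tank capacity are illustrative assumptions, not values from the question):

import numpy as np
import gym
from gym import spaces

class RaceStrategyEnv(gym.Env):
    def __init__(self):
        super().__init__()
        # 0 = continue, 1/2/3 = pit and refuel for 2/4/6 laps
        self.action_space = spaces.Discrete(4)
        # obs = [race position x1, current round/lap x2, fuel in tank x3]
        low = np.array([1.0, 1.0, 0.0], dtype=np.float32)
        high = np.array([20.0, 60.0, 100.0], dtype=np.float32)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)

    def reset(self):
        # Start mid-field, lap 1, half a tank (placeholder values).
        return np.array([10.0, 1.0, 50.0], dtype=np.float32)

    def step(self, action):
        # ... advance the race simulation by one lap here ...
        obs = np.array([10.0, 2.0, 48.0], dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

Normalizing each component to [0, 1] before feeding it to the agent often helps training, but the raw values shown here already satisfy the Box bounds.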
I am fairly new to MATLAB and Simulink. I am trying to use the Motor Control Blockset in Simulink to implement an FOC algorithm. I am using the MTPA Control Reference block to calculate the Id and Iq currents for me, but there are parameters in the block I don't understand. What is the difference between max current and base current? How do you calculate stator d-axis inductance? Is this given in the motor datasheet or do we have to calculate it? And if so, how do I do it? What is the per-unit (PU) in I/P signal units? Why would you choose this rather than SI units? Also, is the permanent magnet flux linkage meant to be in the motor datasheet? I am modelling the iPower Gimbal motor GBM2804H-100T. Your help would be really appreciated.
MTPA Reference Documentation link: https://www.mathworks.com/help/mcb/ref/mtpacontrolreference.html
I stumbled upon your question today.
I'd request you to try posting on MATLAB Central for a faster response.
Anyway, apologies for the delayed response. Please find your answers below.
What is the difference between max current and base current?
Max current is the motor's rated current. This is also explained in the Parameters tab here.
Base current is the reference value used while working with the PU system. This is usually higher than the motor's rated current. Usually, we consider the peak AC current as measured by the ADCs to be the base value, but you're free to change it to any reference value. It may be the same as the max current, or it may be different.
How do you calculate stator d-axis inductance?
For starters, Ld = Ls (total stator inductance) for Surface PMSMs. It's different for IPMSMs.
Is this given in the motor datasheet or do we have to calculate it? And if so, how do I do it?
It's usually specified by the manufacturer. However, the real motor may have variation with respect to the design specifications. Hence, you need to run some tests and measure it.
We have a tool to measure the motor parameters. See this link for more details.
What is the per-unit (PU) in I/P signal units?
You can choose to work with SI units or the PU system in the algorithm. We recommend working with the PU system for efficient code generation.
For more details, refer to this page.
Try simulation/code generation of this example. Type the variable name 'PU_System' at the MATLAB Command Prompt for details related to the base values.
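As a quick sketch of the idea (the symbols here are generic; the exact base definitions used by the blockset are in the linked documentation): a per-unit quantity is just the physical quantity divided by a chosen base value,
I_pu = I / I_base,  V_pu = V / V_base,
so most signals end up roughly within [-1, 1].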
Why would you choose this rather than SI units?
Computational efficiency for embedded systems.
Scalability.
This is also answered on this page.
Also, is the permanent magnet flux linkage meant to be in the motor datasheet?
No. This is also measured via the parameter estimation tool.
You can also compute the PM flux linkage from the back-EMF constant or the torque constant using the equations mentioned on this page.
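As a hedged sketch of that relation (the exact form depends on the dq-transform scaling and on whether the constants are given as peak or RMS, phase or line quantities, so check the conventions on the linked page first): with an amplitude-invariant dq frame and negligible saliency, the electromagnetic torque is
T_e = (3/2) * p * lambda_pm * i_q,
so the PM flux linkage follows from the torque constant K_t = T_e / i_q (N·m per ampere of peak phase current) as
lambda_pm = (2/3) * K_t / p,
where p is the number of pole pairs.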
I hope this was useful.
Stay safe!
--
Darshan Pandit | MathWorks
For more resources see: MATLAB Central
I am trying to develop a MATLAB Simulink model that will help me study the load of my department.
The model works; however, one of the blocks goes right over my head, as I used the internet to help me build it.
Here is the main block:
The Scope displays the Voltage, Current & Power.
The "dept01" block inputs the data from a .csv file and contains only [Time, Power].
Here is what goes inside of the "Electrical Department" Block:
I have no problem understanding this part, I'm simply splitting the total power into three portions.
NOTE: I am also assuming that ultimately Q=0 so Total Power = Real Power
This is the second step of the "Electrical Department" block, which I cannot understand in any way. Maybe my concepts are weak, but this part makes no sense to me.
Can someone please explain to me how the block calculates the voltage & current using just the power? Also, how does it imitate the function of a load so that the energy meter sees it as a load?
Thanks!
A load can be emulated by connecting a current source in series with the voltage source, as seen in the diagram. In your case, a controlled current source has been used, and it looks like the dependent current source is derived from the voltage. I request you to give the details of the relational operator and the sigma block that are used; without those you cannot derive the relationship. If the current is dependent on the voltage, as in this case, the voltage and current can be calculated simply from the power.
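As a sketch of the underlying arithmetic (assuming, as the question does, Q = 0 so the load is purely real): the source side fixes the voltage V, so the current a load of power P must draw is
I = P / V,
and a controlled current source fed with the commanded power divided by the measured voltage injects exactly that current. The energy meter then sees V * I = P, the same as it would for a physical load.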
I am working on a data analysis project over the summer. The main goal is to use access logging data from a hospital, recording users accessing patient information, and try to detect abnormal access behaviors. Several attributes have been chosen to characterize a user (e.g. employee role, department, zip code) and a patient (e.g. age, sex, zip code). There are about 13-15 variables under consideration.
I was using R before and now I am using Python. I am able to use either depending on any suitable tools/libraries you guys suggest.
Before I ask any question, I do want to mention that a lot of the data fields have undergone an anonymization process when handed to me, as required in the healthcare industry for the protection of personal information. Specifically, a lot of VARCHAR values are turned into random integer values, only maintaining referential integrity across the dataset.
Questions:
An exact definition of an outlier was not given (it's defined based on the behavior of most of the data, if there's a general behavior) and there's no labeled training set telling me which rows of the dataset are considered abnormal. I believe the project belongs to the area of unsupervised learning so I was looking into clustering.
Since the data is mixed (numeric and categorical), I am not sure how would clustering work with this type of data.
I've read that one could expand the categorical data and let each category in a variable be either 0 or 1 in order to do the clustering, but then how would R/Python handle such high-dimensional data for me? (Simply expanding employee role would bring in ~100 more variables.)
How would the result of clustering be interpreted?
Using a clustering algorithm, wouldn't the potential "outliers" be grouped into clusters as well? And how am I supposed to detect them?
Also, with categorical data involved, I am not sure how "distance between points" is defined anymore, and whether the proximity of data points indicates similar behaviors. Does expanding each category into a dummy column with true/false values help? What is the distance then?
Faced with the challenges of cluster analysis, I also started to try slicing the data up and just look at two variables at a time. For example, I would look at the age range of patients accessed by a certain employee role, and I use the quartiles and inter-quartile range to define outliers. For categorical variables, for instance, employee role and types of events being triggered, I would just look at the frequency of each event being triggered.
Can someone explain to me the problem of using quartiles with data that's not normally distributed? And what would be the remedy of this?
And in the end, which of the two approaches (or some other approaches) would you suggest? And what's the best way to use such an approach?
Thanks a lot.
You can decide upon a similarity measure for mixed data (e.g. Gower distance).
Then you can use any of the distance-based outlier detection methods.
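As a minimal sketch of that approach (this uses the third-party 'gower' package and made-up toy data; the column names are illustrative assumptions, not your schema):

import numpy as np
import pandas as pd
import gower  # pip install gower

# Toy mixed-type data standing in for the access-log features.
df = pd.DataFrame({
    "patient_age": [34, 40, 29, 71],
    "employee_role": ["nurse", "nurse", "physician", "clerk"],
    "department": ["ER", "ER", "ICU", "ER"],
})

# Pairwise Gower dissimilarity matrix; numeric and categorical columns
# get their own per-feature dissimilarities internally.
D = gower.gower_matrix(df)

# Simple distance-based outlier score: mean dissimilarity to the k
# nearest neighbours. Rows with large scores are candidate outliers.
k = 2
knn = np.sort(D, axis=1)[:, 1:k + 1]  # drop the zero self-distance
outlier_score = knn.mean(axis=1)
print(outlier_score)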
You can use the k-prototypes algorithm for mixed numeric and categorical attributes.
Here you can find a Python implementation.
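Along the same lines, a minimal sketch with the 'kmodes' package (pip install kmodes); the toy data, cluster count, and column indices are illustrative assumptions:

import numpy as np
from kmodes.kprototypes import KPrototypes

# Columns: [patient_age (numeric), employee_role (cat.), department (cat.)]
X = np.array([
    [34, "nurse", "ER"],
    [40, "nurse", "ER"],
    [29, "physician", "ICU"],
    [71, "clerk", "ER"],
], dtype=object)

kproto = KPrototypes(n_clusters=2, init="Huang", random_state=0)
labels = kproto.fit_predict(X, categorical=[1, 2])

# Very small clusters, or points far from their cluster prototype,
# are candidates for the "abnormal access" behaviour you are after.
print(labels)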
Is it possible to use the previous value of a time-varying variable?
For example:
Suppose I have a pipe whose inlet temperature is 298 K with a specified uniform mass flow (m_flow), and suppose I am heating the pipe using a heater of 100 W.
The outlet temperature will reach a higher value, say 302 K. Now, if I have to use this outlet temperature as my inlet temperature (in the sense that I am recirculating the water), how would I do that?
Is it possible to update the value of the inlet temperature based on the outlet temperature at the previous timestep, so that for the next iteration the inlet temperature is the same as the outlet temperature in the previous iteration (in other words, the fluid would be recirculating)?
Thanks
You cannot access the value at the previous time step. The closest you can get in Modelica is using delay(expr, T) to get the value from T units of time ago.
The timestep does not enter into it at all. A model that uses information about the timestep is just wrong. Nature doesn't know or care about integration time steps, and the model should reflect that.
It seems to me what you want to capture is transport delay. Transport delay is the delay introduced by the time it takes for molecules, electrons, etc. to move through the system. So presumably what you wish to model is the time it takes the inlet fluid to reach the exit. Again, this has nothing to do with the integration timestep but rather with the velocity of the fluid and the distance it must travel. Once you know how long that takes (either by a priori knowledge of the system or by looking at the simulation results themselves), you can follow Marco's suggestion of using the delay operator.
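As a rough sketch of that reasoning (the plug-flow assumption and the symbols are mine, not part of the original answer): if the fluid travels the pipe length L at mean velocity v, the transport delay is
tau = L / v,
and the recirculated inlet temperature is
T_inlet(t) = T_outlet(t - tau),
which is exactly the kind of relation the delay operator expresses.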
In order to set up a proper model for the system you described, I suggest you look at the example
Modelica.Thermal.FluidHeatFlow.Examples.IndirectCooling
from the Modelica Standard Library ver. 3.2. Instead of one pipe, you can put an ambient or control-volume component to better suit your needs. Moreover, by using continuous and differentiable equations (the delay function is not), you will benefit from some of the advantages of the Modelica code, e.g. you will be able to reuse your models in a much wider range of cases, solve inverse problems, solve initial value problems, ...
I hope this helps,
Marco