Should I use SystemVerilog 2-state data type in design (not verification)? - system-verilog

SystemVerilog has 2-state data types; should I use them in design (not verification)?
I know they can improve simulation performance, but do they have any impact on synthesis?

Using 2-state types does not necessarily improve simulation performance unless it saves you a significant amount of memory, so I would say it has no impact on performance. Synthesis tools do not really care about 2-state versus 4-state unless you are specifying don't-cares, which can only appear in literals or parameters, not variables.
The real issue with 4-state versus 2-state is in simulation, where you may need or want to propagate X values for error conditions or don't-cares.

Related

How to control ordering of Matlab *optimproblem.Variables*

Matlab's optimproblem class of objects allows users to define Integer Linear Programming (ILP) problems using symbolic variables. This is dubbed the "problem-based" formulation. Internal methods take care of setting up the detailed ILP formulation by assembling the coefficient arrays and matrices for the objective function, equality constraints, and inequality constraints. In Matlab, these details are referred to as the "structure" for the "solver-based" formulation.
Users can see the order in which the optimproblem.Variables are taken when setting up the solver-based formulation by using prob2struct to explicitly convert an optimizationproblem object into a solver-based structure. According to the Algorithms section of the prob2struct page, the variables are taken in the order in which they appear in the optimizationproblem.Variables property.
I haven't been able to find what determines this order. Is there any way to control the order, maybe even change it if necessary? This would allow one to control the order of the scalar variables in the archetypal ILP problem setup, i.e., the solver-based formulation.
Thanks.
Reason for this question
I'm using Matlab as a prototyping environment, and may be relying on others to develop based on the prototype, possibly calling other solver engines. An uncontrolled ordering of variables makes it hard to compare, especially if the development has a deterministic way of arranging the variables. Hence my wish to control the variable ordering. If this is not possible, it would be nice to know. I would then know to turn my attention completely to mitigating the challenge of disparately ordered variables.

Define initial parameters of a nonlinear fit with no information

I was wondering if there exists a systematic way to choose initial parameters for this kind of problem (as they can take virtually any form). My question arises from the fact that my solution depends somewhat on the initial parameters (as usual). My fit consists of 10 parameters and approximately 5120 data points (x,y,z) and has nonlinear constraints. I have been doing this by brute force, that is, trying parameters randomly and trying to observe a pattern, but it has led me nowhere.
I also have tried using MATLAB's Genetic Algorithm (to find a global optimum) but with no success as it seems my function has a ton of local minima.
For the purposes of my problem, I need to justify in some manner the reasons behind choosing the initial parameters.
Without any insight into the model and the likely values of the parameters, the search space is too large for anything feasible: just trying ten values for each of your ten parameters already corresponds to ten billion combinations.
There is no magical black box.
You can try Bayesian optimization to find a global optimum of expensive black-box functions. Matlab describes its implementation, bayesopt, as
Select optimal machine learning hyperparameters using Bayesian optimization
but you can use it to optimize any function. Bayesian Optimization works by updating a prior belief over a distribution of functions with the observed data.
To speed up the optimization I would recommend adding your existing data via the InitialX and InitialObjective input arguments.
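If you want to prototype the same warm-start idea outside Matlab, here is a rough Python sketch using scikit-optimize's gp_minimize, whose x0/y0 arguments play a role analogous to bayesopt's InitialX and InitialObjective. The objective function, bounds, and previously evaluated points below are made up purely for illustration.

```python
# Sketch: warm-starting Bayesian optimization with already-evaluated points.
# The objective is a toy stand-in for an expensive fit-quality metric.
import numpy as np
from skopt import gp_minimize

def objective(params):
    # Placeholder for e.g. the residual norm of your nonlinear fit.
    x = np.asarray(params)
    return float(np.sum((x - 0.3) ** 2))

# One (low, high) bound per fit parameter; 10 parameters as in the question.
bounds = [(-1.0, 1.0)] * 10

# Parameter vectors you already tried by brute force, and their objective values.
x0 = [list(np.random.uniform(-1, 1, 10)) for _ in range(5)]
y0 = [objective(p) for p in x0]

result = gp_minimize(
    objective,
    bounds,
    x0=x0,          # previously tried parameter vectors
    y0=y0,          # their observed objective values
    n_calls=50,     # total budget of new objective evaluations
    random_state=0,
)
print("best parameters:", result.x)
print("best objective:", result.fun)
```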

Functional Mockup Interface (FMI): Loose vs. strong coupling

I am new to the topic of co-simulation. I am familiar with the definitions (based on Trcka, "COMPARISON OF CO-SIMULATION APPROACHES FOR BUILDING AND HVAC/R SYSTEM SIMULATION"):
Quasi-dynamic coupling, also called loose coupling, or ping-pong coupling, where distributed models run in sequence, and one model uses the known output values, based on the values at the previous time steps, of the coupled model.
Fully-dynamic coupling, also called strong coupling, or onion coupling, where distributed models iterate within each time step until the error estimate falls within a predefined tolerance.
My question: Is FMI/co-simulation a loose coupling method? What is FMI/model-exchange? From my understanding, it is not a strong coupling method. Am I understanding correctly that in model-exchange the tool that imports the FMU collects all the ODEs and algebraic equations and solves the entire system with a single solver? So it is more a standard for describing models in a unified way so that they can be integrated into different simulation environments?
Thank you very much for your help
FMI/Model-exchange is targeted at the distribution of models (systems of differential algebraic equations), whereas FMI/Co-Simulation targets the distribution of models along with an appropriate solver.
Due to the many challenges in coding solvers with appropriate support for rollback, it is hard to come by exported FMUs that can be used in a strongly coupled co-simulation.
So, to answer your question: it depends on the scenario. If you wish to simulate a strongly coupled physical system using FMI/Co-simulation, and you wish to do so with multiple FMUs, it better be that these support rollback, to avoid stability issues. If you have, for example, a scenario where one FMU simulates the physical system, and another FMU simulates a controller, then you may do well with a loose coupling approach.
It is hard to pinpoint exactly how strongly coupled two FMUs need to be before you need to apply a stabilization technique.
Have a look at the following experiment, which compares a strong coupling master with a loose coupling one.
Both masters are used for the co-simulation of a strongly coupled mechanical system:
https://github.com/into-cps/case-study_mass-springer-damper
Also, see the following report (disclosure: I contributed to it :) ) for an introduction to these concepts:
https://arxiv.org/pdf/1702.00686v1
I'm not an expert on simulation solvers, but I'm involved in an implementation of an FMI Co-Simulation slave.
First, you are entirely right about model-exchange.
Regarding co-simulation: the solver sets the input values, does a step, and reads the output values. There are no interactions within a time step, so I would say that is more of a quasi-dynamic coupling.
But it is possible for the solver to cancel the previous step in order to refine the time step and redo the computation, and so on until the error estimate falls within a predefined tolerance. That is closer to a fully-dynamic coupling.
Because it is the responsibility of the solver (the co-simulation master) to set/get input/output values and to do steps (and refine time steps), the kind of coupling with another model will depend on the master.
regards,
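To make the step loop above concrete, here is a rough Python sketch of a loose-coupling (Jacobi-style) co-simulation master. The Fmu class is a made-up stand-in for a real FMI wrapper, not the actual FMI API; with a library such as FMPy or PyFMI the calls would map onto fmi2Set*/fmi2DoStep/fmi2Get*.

```python
# Sketch of a loose-coupling (Jacobi) co-simulation master.
# Fmu is a toy stand-in: each instance hides its own "solver" and state.

class Fmu:
    """Minimal stand-in for a co-simulation FMU (not the real FMI API)."""
    def __init__(self, name, state=0.0):
        self.name = name
        self.state = state
        self.input = 0.0

    def set_input(self, value):
        self.input = value

    def do_step(self, t, h):
        # The FMU's internal solver advances from t to t + h using the frozen input.
        self.state += h * (self.input - self.state)

    def get_output(self):
        return self.state


def jacobi_master(fmu_a, fmu_b, t_end, h):
    """Loose coupling: both FMUs step on outputs from the previous macro step.
    A strong-coupling master would instead iterate (and roll back) each step
    until the coupling error estimate falls below a tolerance."""
    t = 0.0
    y_a, y_b = fmu_a.get_output(), fmu_b.get_output()
    while t < t_end:
        # Exchange outputs computed at the previous communication point.
        fmu_a.set_input(y_b)
        fmu_b.set_input(y_a)
        fmu_a.do_step(t, h)
        fmu_b.do_step(t, h)
        y_a, y_b = fmu_a.get_output(), fmu_b.get_output()
        t += h
    return y_a, y_b


if __name__ == "__main__":
    print(jacobi_master(Fmu("plant", 1.0), Fmu("controller", 0.0), t_end=1.0, h=0.1))
```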

How to interpret the discriminator's loss and the generator's loss in Generative Adversarial Nets?

I am reading people's implementations of DCGAN, especially this one in TensorFlow.
In that implementation, the author plots the losses of the discriminator and of the generator, which are shown below (images come from https://github.com/carpedm20/DCGAN-tensorflow):
Neither the discriminator loss nor the generator loss seems to follow any pattern, unlike ordinary neural networks, whose loss decreases as training iterations increase. How should I interpret the losses when training GANs?
Unfortunately, as you've said, for GANs the losses are very non-intuitive. Mostly this comes down to the fact that the generator and the discriminator are competing against each other, so an improvement in one means a higher loss for the other, until that other one learns better from the loss it receives, which in turn hurts its competitor, and so on.
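To make that concrete, here is a minimal PyTorch sketch of one training step with the standard binary cross-entropy GAN losses; the tiny networks and random data are toy placeholders, not the DCGAN implementation referenced above.

```python
# Minimal sketch of the standard GAN losses (toy networks, random data).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))   # generator
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))    # discriminator
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, 8)   # stand-in for a batch of real samples
z = torch.randn(64, 16)     # latent noise
fake = G(z)

# Discriminator step: push D(real) toward 1 and D(fake) toward 0.
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push D(fake) toward 1, i.e. fool the just-updated discriminator.
g_loss = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

# As D improves, d_loss falls while g_loss rises, and vice versa; neither curve
# decreases monotonically the way a plain classifier's loss would.
print(float(d_loss), float(g_loss))
```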
Now, one thing that should happen often enough (depending on your data and initialisation) is that both the discriminator and generator losses converge to some roughly constant values.
(It's OK for the losses to bounce around a bit; that's just evidence of the model trying to improve itself.)
This loss convergence would normally signify that the GAN model has found some optimum where it can't improve further, which should also mean it has learned well enough. (Also note that the numbers themselves usually aren't very informative.)
Here are a few side notes that I hope will be of help:
If the losses haven't converged very well, it doesn't necessarily mean that the model hasn't learned anything; check the generated examples, as they sometimes come out good enough. Alternatively, you can try changing the learning rate and other parameters.
If the model converged well, still check the generated examples; sometimes the generator finds one or a few examples that the discriminator can't distinguish from the genuine data. The trouble is that it then keeps producing only these few and never creates anything new; this is called mode collapse. Usually, introducing some diversity into your data helps.
As vanilla GANs are rather unstable, I'd suggest using some version of the DCGAN models, as they contain features like convolutional layers and batch normalisation that are supposed to help with the stability of convergence. (The picture above is from a DCGAN rather than a vanilla GAN.)
This is just common sense, but still: as with most neural net structures, tweaking the model, i.e. changing its parameters and/or architecture to fit your particular needs/data, can either improve the model or break it.

Are compilers getting better at optimizing code over time, and if so at what rate? [closed]

We know, for example, that Moore's law states that the number of transistors on a chip doubles every 1.8-2 years (and hence computing power has been increasing at approximately this rate). This got me thinking about compiler optimizations. Are compilers getting better at making code run faster as time goes on? If they are, is there any theory as to how this performance increase scales? If I were to take a piece of code written in 1970 and compiled with 1970 compiler optimizations, would that same code run faster on the same machine if compiled with today's optimizations? Can I expect a piece of code written today to run faster in, say, 100 years solely as the result of better optimizations/compilers (obviously independent of improvements in hardware and algorithms)?
This is a complex, multi-faceted question, so let me try to hit on a few key points:
Compiler optimization theory is highly complex and is often (far) more difficult than the actual design of the language in the first place. This domain incorporates many other complex mathematical subdomains (e.g., directed graph theory). Some problems in compiler optimization theory are known to be NP-complete or even undecidable (which represent the most complex categories of problems to solve).
While there are hundreds of known techniques (see here, for example), the implementation of these techniques is highly dependent on both the computer language and the targeted CPU (such as instruction set and pipelines). Because computer languages and CPUs are constantly evolving, the optimal implementations of even well-known techniques can change over time. New CPU features and architectures can also open up previously unavailable optimization techniques. Some of the most cutting-edge techniques may also be proprietary and thus not available to the general public for reuse. For example, several commercial JVMs offer specialty optimizations to the JIT compilation of Java bytecode which are quantitatively superior to (default) open-source JVMs on a statistical basis.
There is an unmistakable historical trend toward better and better compiler optimization. This is why, for example, it is quite rare nowadays that any manual assembly coding is done regularly. But due to the factors already discussed (and others), the evolution of the efficiency and benefits provided by automatic compiler optimizations has been quite non-linear historically. This is in contrast to the fairly consistent curvature of Moore's law and other laws relating to computer hardware improvements. Compiler optimization's track record is probably better visualized as a line with many "fits and starts". Because the factors driving the non-linearity of compiler optimization theory will not likely change in the immediate future, it's likely this trajectory will remain non-linear for at least the near future.
It would be quite difficult to state even an average rate of improvement when languages themselves are coming and going, not to mention CPU models with different hardware features coming and going. CPUs have evolved different instruction sets and instruction set extensions over time, so it's quite difficult to even do an "apples to apples" comparison. This is true regardless of which metric you use: program length in terms of discrete instructions, program execution time (highly dependent on CPU clock speed and pipelining capabilities), or others.
Compiler optimization theory is probably now in the regime of diminishing returns. That is to say that most of the low-hanging fruit has been addressed and many of the remaining optimizations are either quite complex or provide relatively small marginal improvements. Perhaps the greatest coming factor which will disruptively impact compiler optimization theory will be the advent of weak (or strong) AI. Because many of the future gains in compiler optimization theory will require highly complex predictive capabilities, the best optimizers will actually have some level of innate intelligence (for example, to predict the most common user inputs, to predict the most common execution paths, and to reduce NP-hard optimization problems into solvable sub-problems, etc.). It could very well be possible in the future that every piece of software you use is specifically custom compiled just for you, in a way tailored to your specific use cases, interests, and requirements. Imagine that your OS (operating system) is specifically compiled or re-compiled just for you based on your specific use cases as a scientist vs. a video gamer vs. a corporate executive, or old vs. young, or any other combination of demographics that potentially impact code execution.