How can I change the stiffness of a spring during analysis in Abaqus? - subroutine

For example, I have this static system (shown in the attached image).
My goal:
I want to investigate the behavior of the beam when the stiffness of the spring is changed during the analysis.
In "Step-1" the system is loaded by a concentrated load "F".
I want:
In "Step-2" the system shall be loaded by the same load "F", but the stiffness of the spring shall be changed.
Because the spring is not a type of boundary condition in Abaqus, I can't change its stiffness in "Step-2". I can only change the stiffness after a job has finished, which means I always get a new system. I want to change the stiffness in the already deformed system.
Many thanks in advance!

You can define the dependency of spring stiffness on a field variable. Then controlling this variable will let you change the stiffness as needed.

Related

Why does changing a parameter of the controlling system make the system stiff?

I have a model that works fine with the following controller parameters,
but if I change one of the parameters, the system becomes stiff and there is no chance of solving it at all.
So my questions are:
Why does changing just one parameter make the system stiff?
If I run into a stiffness problem again, how can I locate the exact parameter that causes it?
DASSL is an implicit solver and should therefore be able to deal with stiff systems pretty well. Still, it seems the solver has to take more than 500 steps within less than 2 s, which is your output interval (and which triggers the message). In your case this could point to fast dynamics happening within the model.
Regarding your questions:
1. If the model simulates to the end, check the controlled variables and see whether fast oscillations (frequencies above 100 Hz) occur. This can happen when increasing the proportional gain of the controller, which makes the overall system "less stable".
2. General advice on this is difficult, but the linearSystems2 library can help. Creating a "Full Linear Analysis" gives a list of states and how they correlate to the poles. The poles with the highest frequencies are usually responsible for the stiffness, and seeing which states relate to the poles of interest indicates which states to investigate. The way from a state back to a parameter is up to the modeler - at least I don't know general advice on this.
Applying point 2 to Modelica.Blocks.Examples.PID_Controller, the result shows that the spring is likely responsible for the fastest states in the system.
The answer is yes! Changing only one parameter value may cause the system to be stiff.
Assuming that a given model maps to an explicit ODE system:
dx/dt = f(x,p,...)
Conventionally, a system can be characterized as stiff via stiffness indices expressed in terms of the eigenvalues of the Jacobian df/dx. For instance, one of these indices is the stiffness ratio: the ratio of the largest eigenvalue magnitude to the smallest eigenvalue magnitude of the Jacobian. If this ratio is large (some literature assumes > 10^5), the system is characterized as stiff around the chosen initial and parameter values.
The Jacobian df/dx, as well as its eigenvalues, is a time-dependent function of p and the initial values. So, theoretically and depending on the given system, a single parameter can be capable of causing such undesired system behavior.
Having a way to access the Jacobian and to perform eigenvalue analysis, together with parametric sensitivity analysis (e.g. via computation of dynamic parameter sensitivities), makes it possible to identify such problematic parameters.
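As a rough illustration of the stiffness ratio mentioned above, here is a minimal Python/NumPy sketch; the two-state system and the parameter values are made up for the example, not taken from the model in the question:

```python
import numpy as np

# Hypothetical linear system dx/dt = A(p) x, so the Jacobian df/dx is A itself.
# The parameter p stands in for something like a spring stiffness or a controller gain.
def jacobian(p):
    return np.array([[-1.0,  1.0],
                     [ 0.0, -p ]])

for p in (2.0, 1e6):
    eig = np.linalg.eigvals(jacobian(p))
    ratio = max(abs(eig)) / min(abs(eig))   # stiffness ratio |lambda|_max / |lambda|_min
    print(f"p = {p:g}: eigenvalues = {eig}, stiffness ratio = {ratio:.1e}")
```

With p = 2 the ratio is small, while with p = 1e6 it exceeds the 10^5 threshold mentioned above, so changing that single parameter alone makes the system stiff.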

What is an appropriate value for the parameter "Size" in the nnet function in R?

I read somewhere that it should be which.is.max of the nnet model. Is there a rule of thumb for choosing the value of Size?
Unfortunately, a single appropriate value for the size hyperparameter does not exist. This value (as well as the weight decay) depends on the data and the application at hand. Cross-validation procedures may provide you with decent values for a specific dataset. You should try random search or grid search, which are two basic (yet effective) approaches to this problem. I also recommend checking this thread about how to choose the number of hidden layers and nodes in a feedforward neural network.
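To make the cross-validation suggestion concrete, here is a minimal sketch of tuning the hidden-layer size by grid search. It uses scikit-learn's MLPClassifier in Python rather than R's nnet (in R, the caret package's train() with method = "nnet" tunes size and decay in the same spirit), and the grid values are just placeholders:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

param_grid = {
    "hidden_layer_sizes": [(2,), (4,), (8,), (16,)],  # analogue of nnet's "size"
    "alpha": [1e-4, 1e-2, 1e-1],                      # analogue of nnet's "decay"
}

search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```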

Dealing with a large kernel matrix in SVM

I have a matrix X, size 40-by-60000
While writing the SVM, I need to form a linear kernel: K = X'*X,
and of course I get an error:
Requested 60000x60000 (26.8GB) array exceeds maximum array size preference.
How is it usually done? The dataset is MNIST, so someone must have done this before. In this case rank(K) <= 40, so I need a way to store K and later pass it to quadprog.
How is it usually done?
Usually, kernel matrices for big datasets are not precomputed. Since the optimisation methods used (like SMO or gradient descent) only need access to a subset of samples in each iteration, you simply need a data structure that acts as a lazy kernel matrix; in other words, each time the optimiser requests K[i,j], you compute K(xi, xj) at that moment. Often there are also caching mechanisms to make sure that frequently requested kernel values are already prepared.
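A minimal Python sketch of such a lazy kernel matrix, assuming (as in the question) that the samples are stored as the columns of a 40-by-60000 matrix; functools.lru_cache stands in here for the more elaborate caching schemes SVM solvers use:

```python
import numpy as np
from functools import lru_cache

class LazyLinearKernel:
    """Linear kernel whose entries K[i, j] = <x_i, x_j> are computed on request,
    instead of materialising the full n-by-n matrix."""

    def __init__(self, X):
        self.X = X                        # shape (n_features, n_samples)

    @lru_cache(maxsize=1_000_000)         # cache frequently requested entries
    def entry(self, i, j):
        return float(self.X[:, i] @ self.X[:, j])

X = np.random.randn(40, 60000)            # random stand-in for the data
K = LazyLinearKernel(X)
print(K.entry(0, 1), K.entry(1, 0))        # single entries, no 60000 x 60000 array
```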
If you're willing to commit to a linear kernel (or any other kernel whose corresponding feature transformation is easily computed) you can avoid allocating O(N^2) memory by using a primal optimization method, which does not construct the full kernel matrix K.
Primal methods represent the model using a weighted sum of the training samples' features, and so will only take O(NxD) memory, where N and D are the number of training samples and their feature dimension.
You could also use liblinear (if you resolve the C++ issues).
Note this comment from their website: "Without using kernels, one can quickly train a much larger set via a linear classifier."
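To illustrate both points in Python (a sketch with random stand-in data, not the poster's MATLAB/quadprog setup): for a linear kernel K = X'*X, any product K*v can be formed as X'*(X*v) without ever building K, and a primal solver such as scikit-learn's LinearSVC (which wraps liblinear) stores only a D-dimensional weight vector.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Random stand-in for the data in the question: 40 features x 60000 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 60000))
y = (rng.normal(size=60000) > 0).astype(int)

# 1) Kernel-free matrix-vector product: K @ v == X.T @ (X @ v), K is never formed.
v = rng.normal(size=60000)
Kv = X.T @ (X @ v)                       # O(N*D) work and memory instead of O(N^2)

# 2) Primal linear SVM: stores only a 40-dimensional weight vector.
clf = LinearSVC(C=1.0, dual=False, max_iter=5000).fit(X.T, y)   # samples as rows
print(Kv.shape, clf.coef_.shape)
```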
This problem occurs because of the large size of your data set: it exceeds the amount of memory available on your system. 64-bit systems can process much larger arrays than 32-bit systems, so you'll want to check which of the two you are running.

Detect and Remove Multicollinearity in a high-dimensional time-series

I am working with a data matrix in MATLAB with dimensions n-by-m, where n is the number of regressors (61) and m is the number of data points (500). I have reason to suspect that the data are highly collinear. I have been reading on the topic of collinearity and have come to realize that there are many valid options (e.g. principal component analysis).
Starting with the simplest approach, I tried to implement PCA. What PCA gives me is the following output: principal components, latent variables, and POV (percentage of variance explained). I basically create the principal component space for my data.
My question is the following: after obtaining the principal component space, how can I convert the variables I have (PC, latent, POV) back into the time space, restricted to the relevant components (typically those that explain 99% of the variance)? In other words, how can my data reflect the PCA analysis?
Many thanks in advance
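For reference, a minimal Python/NumPy sketch of the step being asked about (the question itself is in MATLAB, where pca and its scores work analogously, and the data here are random stand-ins): keep the components that explain 99% of the variance and map the scores back to the original space.

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-in for the data, arranged as observations x regressors (500 x 61).
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 61))

pca = PCA(n_components=0.99, svd_solver="full")   # keep components for 99% of variance
scores = pca.fit_transform(data)                  # data in the principal-component space

# Back to the original (time) space, using only the retained components.
reconstructed = pca.inverse_transform(scores)     # shape (500, 61)
print(pca.n_components_, reconstructed.shape)
```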

clustering vs fitting a mixture model

I have a question about using a clustering method vs fitting the same data with a distribution.
Assume I have a dataset with two features (feat_A and feat_B), and assume I use a clustering algorithm to divide the data into an optimal number of clusters, say 3.
My goal is to assign to each input point [feat_Ai, feat_Bi] a probability (or something similar) that the point belongs to cluster 1, 2, or 3.
a. First approach, with clustering:
I cluster the data into the 3 clusters and assign to each point a probability of belonging to a cluster based on its distance from that cluster's center.
b. Second approach, using a mixture model:
I fit a mixture model or mixture distribution to the data. The data are fit to the distribution using an expectation-maximization (EM) algorithm, which assigns posterior probabilities to each component density with respect to each observation. Clusters are assigned by selecting the component that maximizes the posterior probability.
In my problem I find the cluster centers (or fit the model, if approach b is used) on a subsample of the data. Then I have to assign probabilities to a lot of other data. I would like to know which approach is better to use with new data, so that the assignments remain meaningful (a short sketch contrasting the two approaches is at the end of this post).
I would go for a clustering method, for example k-means, because:
If the new data come from a distribution different from the one used to create the mixture model, the assignment might not be correct.
With new data the posterior probabilities change.
The clustering method minimizes the within-cluster variance in order to find a kind of optimal separation border, whereas the mixture model takes the variance of the data into account when building the model (I am not sure that the resulting clusters are separated in an optimal way).
More info about the data:
Features shouldn't be assumed dependent.
Feat_A represents the duration of a physical activity and Feat_B the step count. In principle we could say that a higher activity duration means a higher step count, but this is not always true.
Please help me think this through, and let me know if you have any other points.
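As mentioned above, here is a short Python/scikit-learn sketch contrasting the two approaches on made-up two-feature data: k-means distances turned into pseudo-probabilities via a simple softmax heuristic, versus the posterior responsibilities of a Gaussian mixture fitted by EM.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Toy stand-in for the two features (e.g. duration, step count): three blobs.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
                  for c in ((0, 0), (3, 0), (0, 3))])

new_points = np.array([[0.2, 0.1], [2.5, 0.4]])   # "new data" to be assigned

# Approach a: k-means, then turn distances to the centers into pseudo-probabilities
# (the softmax over negative distances is just one simple heuristic).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
dist = km.transform(new_points)                   # distance to each cluster center
pseudo_prob = np.exp(-dist) / np.exp(-dist).sum(axis=1, keepdims=True)

# Approach b: Gaussian mixture fitted by EM; posterior responsibilities come directly.
gmm = GaussianMixture(n_components=3, random_state=0).fit(data)
posterior = gmm.predict_proba(new_points)

print(pseudo_prob.round(3))
print(posterior.round(3))
```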