Calculate number of parameters in a neural network

I am wondering whether the number of parameters in models like ResNet18, VGG16, and DenseNet201 would change if we change the input size to the model.
I measured the number of parameters with the following command:
pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
I have also tried this snippet, and the number of parameters did not change for different input sizes:
import torchvision.models as models
from torchsummary import summary  # assuming summary() comes from the torchsummary package

model = models.resnet18(pretrained=False)
model.cuda()
summary(model, (3, 64, 64))  # ResNet18 expects 3 input channels

No, it would not. The parameters of a model exist to process the input as it propagates through the network.
The parameters are trained to serve a purpose defined by the training task. Suppose the number of parameters did grow with the input size: what would the values of the new parameters be? Would they be random? How would these new, untrained parameters affect the model's inference?
Such a sudden, random change to the fine-tuned, well-trained parameters of a model would be impractical. There may be other algorithms, which I am unaware of, that change their parameter set based on the input, but the architectures mentioned in the question do not work that way.

Trainable parameters do not change when the input changes. If you inspect the weights of the first layer with list(model.parameters())[0].shape, you will see that their shape does not depend on the height and width of the input, only on the number of input channels (e.g. grayscale, RGB, hyperspectral), which contributes very little to the total parameter count in larger models. For further information about getting the input shape, you can see this toy example.
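As a quick check, here is a minimal sketch (exact totals may differ slightly between torchvision versions) comparing the first-layer weight shape and the total parameter count when only the number of input channels changes:

import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=False)

# The first conv layer's weights have shape (out_channels, in_channels, kH, kW);
# the height and width of the input image do not appear anywhere in this shape.
print(list(model.parameters())[0].shape)  # torch.Size([64, 3, 7, 7])

total_rgb = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Swapping the stem for a 1-channel (grayscale) input removes only 64*2*7*7 = 6272 weights,
# a negligible fraction of the ~11.7M parameters in ResNet18.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
total_gray = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total_rgb, total_gray)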


base_margin or init_score for CatBoost regressor

I would like to use a CatBoost regressor for insurance applications (Poisson objective). As I need to account for exposure, how can I set an offset of log(exposure)? With xgboost I use the "base_margin" parameter, and with lightgbm the "init_score" parameter. Is there an equivalent in CatBoost?
Just use the "set_scale_and_bias(scale, bias)" method on your CatBoostRegressor model.
The bias parameter sets the offset of the model prediction results, while the scale parameter should be left at its default, which is 1.
For your insurance Poisson objective the bias should be set to log(exposure).
See more details here: CatBoost documentation
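A minimal sketch of that approach (note that set_scale_and_bias takes scalar arguments, so this assumes a single, common exposure value; the toy data below is purely illustrative):

import numpy as np
from catboost import CatBoostRegressor

# Toy data for illustration only: 100 rows, 3 features, Poisson-distributed counts.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = rng.poisson(lam=2.0, size=100)
exposure = 0.5  # assumed single, common exposure for the whole portfolio

model = CatBoostRegressor(loss_function='Poisson', iterations=100, verbose=False)
model.fit(X_train, y_train)

# Keep the scale at its default of 1 and shift the raw (log-scale) prediction by log(exposure).
model.set_scale_and_bias(1.0, float(np.log(exposure)))
print(model.get_scale_and_bias())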
After looking at the documentation, I found a viable solution. The fit method of both CatBoostRegressor and CatBoostClassifier provides a baseline and a sample_weight parameter that can be used directly to set an offset (for prior exposure) or a sample weight (for severity modelling).
By the way, the optimal approach is to create Pools and provide the offset and weights there:
from catboost import Pool  # Pool bundles data, labels, categorical features and baseline

freq_train_pool = Pool(data=freq_train_ds, label=claim_nmb_train.values, cat_features=xvars_cat, baseline=claim_model_offset_train.values)
freq_valid_pool = Pool(data=freq_valid_ds, label=claim_nmb_valid.values, cat_features=xvars_cat, baseline=claim_model_offset_valid.values)
freq_test_pool = Pool(data=freq_test_ds, label=claim_nmb_test.values, cat_features=xvars_cat, baseline=claim_model_offset_test.values)
Here the data parameters are pd.DataFrames containing the predictors only, label is the actual number of claims, cat_features is a list of the categorical column names, and baseline is the np.array of log exposures. It works.
Using Pools also allows you to provide evaluation sets in the fit method, as shown below.
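A sketch of how that fit call might look (the pool names are the ones defined above; the Poisson loss function is assumed from the question):

from catboost import CatBoostRegressor

model = CatBoostRegressor(loss_function='Poisson')
# The baseline stored in each Pool is used as the prior raw prediction (the log-exposure offset)
# during training; eval_set lets CatBoost track validation metrics as it trains.
model.fit(freq_train_pool, eval_set=freq_valid_pool)
preds = model.predict(freq_test_pool)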

Large Neural Network Pruning

I have done some experiments on neural network pruning, but only on small models. I used to prune the relevant weights as follows (similar to what is explained in the official tutorial https://pytorch.org/tutorials/intermediate/pruning_tutorial.html):
import torch.nn.utils.prune as prune

parameters_to_prune = []
for name, module in model.named_modules():
    if 'layer' in name:
        parameters_to_prune.append((getattr(model, name), 'weight'))

prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=sparsity_constant,
)
The main problem with doing this is that I have to define a list (or tuple) of layers to prune. This works when I define my model by hand and know the names of the different layers (for example, in the code provided, I knew that all the fully connected layers had the string "layer" in their name).
How can I avoid this process and define a pruning method that prunes all the parameters of a given model, without having to call the layers by name?
All in all, I'm looking for a function that, given a model and a sparsity constant, globally prunes the given model (by masking it):
model = models.resnet18()
function_that_prunes(model, sparsity_constant)
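There is no such built-in function in PyTorch, but one possible sketch of it (using a helper name introduced here, and collecting every Conv2d and Linear module instead of relying on layer names) could look like this:

import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

def prune_globally(model, sparsity_constant):
    # Collect (module, 'weight') pairs for every prunable layer,
    # conservatively limited here to Conv2d and Linear modules.
    parameters_to_prune = [
        (module, 'weight')
        for module in model.modules()
        if isinstance(module, (nn.Conv2d, nn.Linear))
    ]
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=sparsity_constant,
    )

model = models.resnet18()
prune_globally(model, 0.5)  # mask 50% of the collected weights globally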

Predictions using Convolutional Neural Networks and DL4J

This is my first time working with DL4J (Deep Learning for Java) and also my first convolutional neural network. My goal is to use the convolutional neural network to give me some predicted values about an image. I gathered and labelled my images myself. The labels or expected outputs consist of two numbers between 0 and 1 (I just wrote them in the file name, like 0.01x0.87.jpg).
Now I can't find any way to use the DataSetIterator class that DL4J uses while also setting my label values.
Is there a simple way to tell DL4J that I want to train my network to recognize that the image 0.01x0.01.jpg should spit out the values 0.01 and 0.01?
What you want to do is usually known as regression. In contrast to classification, where you want either a 0 or a 1 as output, in regression any value can be the target.
In your case, you will likely want to use a network architecture that uses either a sigmoid (which forces your values to be between 0 and 1) or an identity (which keeps the values as is, i.e. allows for them to be outside of the 0 to 1 range) activation function.
As you have two values that you are trying to predict, you will have to also define that you are using two outputs.
So much for your model architecture.
For data loading, you can use the ImageRecordReader, but also pass it a PathMultiLabelGenerator of your own. When you implement the PathMultiLabelGenerator interface, you will get the full path of the image as a string, and you can do whatever you want with it: for example, remove the file ending, split on "x", and parse the filename into a list of DoubleWritable. DoubleWritable is just a simple wrapper class for double, so creating one is as easy as passing the actual value to the constructor.
To create a dataset iterator you can now follow the documentation on RecordReaderDataSetIterator.

Define Model Parameter as Variable

I am attempting to define the parameter of a model (block) as a variable. For example:
Real WallThickness = 0.5;
Real WallConductance = 10*WallThickness;
Modelica.Thermal.HeatTransfer.Components.ThermalConductor TopPanelConductor(G=WallConductance);
I would like to define "G" so that it remains constant throughout the simulation but the coefficient is updated prior to the simulation based on the other variable "WallThickness". When defining the ThermalConductor parameter "G" as a variable in the model, which is being calculated elsewhere, I get the error message:
The variability of the definition equation:
TopPanelConductor.G = WallConductance;
is higher than the declared variability of the variables.
I would like to define the parameters of a model as variables. This allows me to create parametric definitions as the geometry of the wall changes. Is there a way I can make this definition work?
You mean the geometry changes during simulation? If so, you'll have to rewrite the ThermalConductor model to work with a variable G, because a variable cannot be assigned to a parameter. A variable may vary during the course of simulation. A parameter is fixed at the start of simulation, but can be changed from run to run without recompiling the model, which allows for quicker iteration/design work.
Note that you can also calculate a parameter from other parameters that you define, e.g. to calculate a heat transfer coefficient from a given wall thickness (which you vary from simulation run to simulation run).
An alternative to re-writing the component models is to make the parameter study/variation outside the simulation model. There are at least three approaches:
Export your system model as an FMU (Co-simulation). Import it in Python with PyFMI and write a for loop that varies the parameter value in each iteration. See for example http://www.jmodelica.org/assimulo_home/pyfmi_1.0/pyfmi.examples.html. This is not as complicated as it might sound (see the sketch below).
Make the parameter variation loop in a Modelica Script (mos file). I don't have much experience with this though.
If you are varying geometrical parameters in order to find an optimum of some kind you can use the Optimization Library which is shipped with Dymola (as of version 2017 FD01).
Using one of the above suggestions you can reuse all the components from MSL out of the box.
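A minimal sketch of the first approach, assuming the exported FMU is named "System.fmu", exposes WallThickness as a settable top-level parameter, and contains the TopPanelConductor component from the question (the names here are illustrative):

from pyfmi import load_fmu

# Sweep the wall thickness from run to run; the Modelica model itself
# is compiled only once, when the FMU is exported.
for wall_thickness in [0.1, 0.25, 0.5]:
    model = load_fmu("System.fmu")            # illustrative FMU name
    model.set("WallThickness", wall_thickness)
    res = model.simulate(final_time=3600.0)
    # Last value of the conductor's heat flow (variable path assumed for this system model).
    print(wall_thickness, res["TopPanelConductor.Q_flow"][-1])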
Best regards,
Rene Just Nielsen
There is a hierarchy for variables/parameters that restricts their use. As you are now aware, parameters are not permitted to vary within a simulation. Thus, you get the error stating that you are trying to define a parameter with a variable value or an input variable.
If you need that functionality, I would recommend duplicating the ThermalConductor and changing the variable type from:
parameter Modelica.SIunits.ThermalConductance G
"Constant thermal conductance of material";
to
input Modelica.SIunits.ThermalConductance G
"Constant thermal conductance of material" annotation (Dialog(group="Input Variables"));
That's all there is to it. Note the additional annotation on the input variable: by default, inputs do not show up in the parameter GUI, and the annotation permits them to be shown just like parameters (be careful to clearly label it as an input variable rather than a parameter, though!).
There is work underway that completely redoes the Thermal library, but it is not yet released, so the most straightforward approach is probably the one discussed above.

RapidMiner: Ability to classify based on a user-set support threshold?

I have built a small text analysis model that classifies small text files as either good, bad, or neutral. I was using a support vector machine as my classifier. However, I was wondering whether, instead of classifying into all three classes, I could classify into either Good or Bad, but have any text file whose support is below 0.7 (or some user-specified threshold) be classified as neutral. I know this isn't regarded as the best way of doing this; I am just trying to see what would happen if I took a different approach.
The operator Drop Uncertain Predictions might be what you want.
After you have applied your model to some test data, the resulting example set will have a prediction and two new attributes called confidence(Good) and confidence(Bad). These confidences are between 0 and 1, and for the two-class case they sum to 1 for each example within the example set. The highest confidence dictates the value of the prediction.
The Drop Uncertain Predictions operator requires a min confidence parameter and will set the prediction to missing if the maximum confidence it finds is below this value (you can also have different confidences for different class values for more advanced investigations).
You could then use the Replace Missing Values operator to change all missing predictions to be a text value of your choice.
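Outside RapidMiner, the same thresholding rule is easy to express in a few lines of Python; this is only an illustration of the logic, not of the operators themselves, and the column names are assumed to mirror the confidence attributes described above:

import pandas as pd

scored = pd.DataFrame({
    "confidence(Good)": [0.95, 0.55, 0.80],
    "confidence(Bad)":  [0.05, 0.45, 0.20],
})
min_confidence = 0.7  # user-specified threshold

# Pick the class with the highest confidence, then replace uncertain
# predictions (maximum confidence below the threshold) with "neutral".
conf_cols = ["confidence(Good)", "confidence(Bad)"]
best = scored[conf_cols].max(axis=1)
scored["prediction"] = scored[conf_cols].idxmax(axis=1).str.slice(11, -1)
scored.loc[best < min_confidence, "prediction"] = "neutral"
print(scored)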