I'm making an app to insert GPS positions into a Traccar server. The "documentation" here https://www.traccar.org/osmand/ says that I must send params using this API example:
http://demo.traccar.org:5055/?id=123456&lat={0}&lon={1}&timestamp={2}&hdop={3}&altitude={4}&speed={5}
But it says nothing about the hdop parameter. What is that?
HDOP stands for horizontal dilution of precision. It describes how the geometry of the visible satellites propagates into the precision of the position measurement. Essentially, it indicates location accuracy.
But you don't actually have to provide it. The hdop parameter is optional, as are most of the other parameters in the request.
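For example, here is a minimal Python sketch (not an official Traccar client) that reports a single position to the OsmAnd endpoint from the example above and simply leaves hdop out; the device id, the coordinates, and the assumption that a Unix timestamp is accepted are all illustrative.

import time
import requests  # third-party HTTP library, assumed to be installed

TRACCAR_URL = "http://demo.traccar.org:5055/"  # endpoint from the example above

params = {
    "id": "123456",                 # device identifier registered on the server
    "lat": 48.8566,                 # example coordinates
    "lon": 2.3522,
    "timestamp": int(time.time()),  # assuming a Unix timestamp is accepted here
    "altitude": 35,
    "speed": 0,
    # "hdop": 1.2,                  # optional: send it only if your GPS fix provides it
}

response = requests.get(TRACCAR_URL, params=params)
print(response.status_code)         # 200 means the position was accepted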
I would like to use a CatBoost regressor for insurance applications (Poisson objective). As I need to fix the exposure, how can I set the offset to log_exposure? When using xgboost I use "base_margin", while for lightgbm I use the "init_score" param. Is there an equivalent in CatBoost?
Just use the "set_scale_and_bias(scale, bias)" method on your CatBoostRegressor model.
The bias parameter will set the offset of the model prediction results, while the scale parameter should be left at its default value of 1.
For your insurance Poisson objective, the bias should be set to log(exposure).
See more details here: CatBoost documentation
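For concreteness, a rough sketch of that call (the exposure value here is hypothetical, and since set_scale_and_bias takes a single scalar bias it can only encode one common offset for all rows):

import numpy as np
from catboost import CatBoostRegressor

model = CatBoostRegressor(loss_function="Poisson")

exposure = 0.75                                          # hypothetical scalar exposure
model.set_scale_and_bias(1.0, float(np.log(exposure)))   # scale stays at its default of 1
print(model.get_scale_and_bias())                        # shows the scale and bias now stored on the model

# model.fit(X_train, y_train)                            # training data supplied as usual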
After looking at the documentation, I found a viable solution. The fit method of both CatBoostRegressor and CatBoostClassifier provides a baseline and a sample_weight parameter that can be used directly to set an offset (for prior exposure) or a sample weight (for severity modelling).
By the way, the optimal approach is to create Pools and specify the offset and weights there:
freq_train_pool = Pool(data=freq_train_ds, label=claim_nmb_train.values, cat_features=xvars_cat, baseline=claim_model_offset_train.values)
freq_valid_pool = Pool(data=freq_valid_ds, label=claim_nmb_valid.values, cat_features=xvars_cat, baseline=claim_model_offset_valid.values)
freq_test_pool = Pool(data=freq_test_ds, label=claim_nmb_test.values, cat_features=xvars_cat, baseline=claim_model_offset_test.values)
Here the data parameter contains a pd.DataFrame with the predictors only, label is the actual number of claims, cat_features is a list of the categorical feature names, and baseline is the np.array of log exposures. It works.
Using Pools also allows you to provide evaluation sets in the fit method.
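For example, assuming the pools defined above are in scope, the fit call might look roughly like this (the Poisson loss is taken from the original question):

from catboost import CatBoostRegressor

model = CatBoostRegressor(loss_function="Poisson")

# The baseline stored in each Pool provides the per-row offset (log exposure),
# and the validation Pool is passed as the evaluation set.
model.fit(freq_train_pool, eval_set=freq_valid_pool)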
I remember reading once in the Matlab documentation about an optimisation algorithm which allowed the user to specify the "scale" of variation expected for each parameter during the search (at least initially).
I can't remember what this function is, but now I am using fminsearch and there is no such option. In fact, I can't even specify parameter bounds, and the documentation states that it takes 5% of the initial guess as a default step (or 25e-5 if the guess is 0). Because this step is relative to the initial guess, it makes me think that perhaps I should re-normalise my parameters to a suitable scale, in order to indirectly define a suitable step for my optimisation problem.
For example, if I have a parameter whose value is on the order of 10e5 but for which I would like steps on the order of 100, then I should "divide it" by 500 during optimisation (and obviously multiply it back when computing the objective function). However, this becomes trickier if a parameter range is centred around 0, for example; then I can both rescale and offset it.
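To make the idea concrete, here is a rough sketch of such a rescaling wrapper in Python, using scipy's Nelder-Mead routine as a stand-in for fminsearch (the objective, the scale factors and the offsets are all made up for illustration):

import numpy as np
from scipy.optimize import fmin  # Nelder-Mead, analogous to fminsearch

scale = np.array([500.0, 1.0])   # parameter 1 is of order 10e5, parameter 2 of order 1
offset = np.array([0.0, 1.0])    # shift parameter 2 away from 0 so the 5% rule gives a usable step

def objective(p):
    a, b = p                     # toy objective with a minimum at (2e5, 0.3)
    return (a - 2.0e5) ** 2 / 1e6 + (b - 0.3) ** 2

def scaled_objective(s):
    # The optimiser works on s; map it back to physical units before evaluating
    return objective(s * scale + offset)

x0_physical = np.array([1.0e5, 0.0])
x0_scaled = (x0_physical - offset) / scale

s_opt = fmin(scaled_objective, x0_scaled)
p_opt = s_opt * scale + offset   # map the result back to physical units
print(p_opt)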
My question is: is this effectively what people usually do when using the downhill-simplex method, and is there a "standard" or "better" way to do it?
I need to implement an anti-windup (output limitation) for my PID controller. Simulink offers two options: back calculation and clamping (documentation), which seem to deliver equal results. I know what back calculation does mathematically. It requires defining the back-calculation gain Kb. This gain depends on how long my controller is saturated, so it is actually a dynamic value (because I may have a high variation of saturation times). Do you see a way to control this value? (In this case it would probably be necessary to build my own PID controller, as shown in the documentation above or in the picture below.)
Which brings me to the question: what is clamping actually doing? And what are the other differences? Which one is faster, which one is more robust against stiff slopes? Does anybody have experience using both?
Not sure if this fully answers the question, but the PID Controller documentation page explains a bit more about clamping:
clamping
Stops integration when the sum of the block components exceeds the output limits and the integrator output and block input have the same sign. Resumes integration when the sum of the block components exceeds the output limits and the integrator output and block input have opposite sign. The integrator portion of the block is shown as a diagram in the documentation.
The clamping circuit implements the logic necessary to determine whether integration continues.
If you select the clamping option and look under the mask, you can probably see the details of the clamping circuit.
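For what it's worth, the rule quoted above can be sketched in a discrete-time form roughly like this (plain Python for illustration, not the actual Simulink implementation; the function and state names are made up):

def pid_step_clamping(error, state, kp, ki, kd, dt, out_min, out_max):
    # state is a dict holding "integrator" and "prev_error" between calls
    p = kp * error
    d = kd * (error - state["prev_error"]) / dt
    unsaturated = p + state["integrator"] + d

    saturating = unsaturated > out_max or unsaturated < out_min
    same_sign = (error >= 0) == (state["integrator"] >= 0)

    # The "clamping circuit": hold the integrator while the output saturates
    # and the integrator state and the block input have the same sign.
    if not (saturating and same_sign):
        state["integrator"] += ki * error * dt

    state["prev_error"] = error
    return min(max(unsaturated, out_min), out_max)  # saturated output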
In addition to am304's answer, there are some more things to consider.
Clamping
Clamping will always work. It detects when there is integrator overflow and, using a simple switch, sets the integral path of the PID controller to zero to avoid windup.
Clamping is a commonly used anti-windup method, especially in digital control systems. In serious applications, however, there is also forward clamping involved, which evaluates the controller input as well. This mechanism must be implemented manually.
Back Calculation
Back calculation depends heavily on the back-calculation coefficient Kb. If you don't know how to actually calculate the parameter Kb, don't use back calculation. This method calculates the difference between the actual controller output and the saturated output, amplifies it by Kb, and subtracts it from the I-gain path.
In most cases the default value Kb = 1 will lead to worse results than clamping; it is even possible that it has no effect at all. Kb should be calculated based on the sampling time or, in case a D-gain is involved, based on the D- and I-gains. Appropriate literature should be consulted to calculate the coefficient. Back calculation with a properly chosen coefficient enables better dynamics than clamping!
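For comparison with the clamping sketch above, here is a rough discrete-time sketch of back calculation (again plain Python for illustration; the function and state names are made up):

def pid_step_back_calculation(error, state, kp, ki, kd, kb, dt, out_min, out_max):
    # state is a dict holding "integrator" and "prev_error" between calls
    p = kp * error
    d = kd * (error - state["prev_error"]) / dt
    unsaturated = p + state["integrator"] + d
    saturated = min(max(unsaturated, out_min), out_max)

    # Anti-windup feedback: zero while unsaturated, otherwise it drives the
    # integrator back towards the saturation limit, scaled by Kb.
    antiwindup = kb * (saturated - unsaturated)
    state["integrator"] += (ki * error + antiwindup) * dt

    state["prev_error"] = error
    return saturated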
I am trying to find more information on making a custom PID block in MATLAB. I have most of it done but there are a few parameters that I don't really understand and as such I don't know what value to give them. NOTE I am NOT asking for help tuning PID gains.
They are all inside the filter coefficient block:
When I open the block I have to set a few parameters (output min/max, data type, parameter min/max, etc.). Can someone explain to me what these mean? I can't find good resources anywhere. The only thing I've tried that works is setting each to [] (i.e. -inf) and the input/output data types to 'Inherit: Inherit via internal rule', but then my output goes to hell. If I copy-paste the blocks from the PID block, there are a bunch of variables that I haven't defined anywhere, so the model won't even compile.
Can someone point out some good resources for this or else explain it? Thanks!
You should get your blocks from the standard Simulink library, not from under the PID block mask. The ones under the mask have been set up to use variables that are passed from/through the mask, which you are not doing.
The block you have circled is just a gain block (from the Math library).
You most likely won't need to make any changes to the default settings of the block other than the constant value (which needs to be the N that you want to use in the approximation of the derivative term in your controller).
To answer your specific question about what the parameters are: some of them specify data types (if you don't want to use the default double precision), some are only used for code generation, and others only for other specific tasks.
All of them are described (in more, or sometimes less, detail) in the doc for the block, obtained by pressing the help button on the block's dialog.
Using this Simulink model file as a reference, I'm trying to figure out the two following errors:
Screenshots of the two error messages: http://imagebin.ca/img/dSV8YO.png and http://imagebin.ca/img/OXDf0v.png
I have no idea what has gone wrong with the data type consistency/conversion. Do you know what the error messages mean exactly in the context of the model? It would be great to get an interpretation of the problem so I can solve it. Thanks in advance.
Is the block 'Inner Loop/e^(-s)' driving the block 'Inner Loop/Sum'? It looks like the 'e^(-s)' block is trying to set the Sum block to be double, but the Sum block is already set to some other data type. I'm not sure why that's happening, but here's a snippet from the Sum block documentation:
Inherit: Inherit via internal rule
Simulink chooses a combination of output scaling and data type that requires the smallest amount of memory consistent with accommodating the calculated output range and maintaining the output precision of the block and with the word size of the targeted hardware implementation specified for the model. If the Device type parameter on the Hardware Implementation configuration parameters pane is set to ASIC/FPGA, Simulink software chooses the output data type without regard to hardware constraints. Otherwise, Simulink software chooses the smallest available hardware data type capable of meeting the range and precision constraints. For example, if the block multiplies an input of type int8 by a gain of int16 and ASIC/FPGA is specified as the targeted hardware type, the output data type is sfix24. If Unspecified (assume 32-bit Generic), i.e., a generic 32-bit microprocessor, is specified as the target hardware, the output data type is int32. If none of the word lengths provided by the target microprocessor can accommodate the output range, Simulink software displays an error message in the Simulation Diagnostics Viewer.
You can try forcing the output data type to be double, if that's what you really want, or you can try putting a Data Type Conversion block in front of the Sum block. One other thing that can help is to try turning on Port Data Types from the Format menu. It should show you all the propagated data types when the error happens.