I'm having trouble with the fuzzy inference system. After I reinstalled Windows and MATLAB on my laptop, running my code suddenly produced this message: "Error: File: readfis.m Line: 1 Column: 1
The input character is not valid in MATLAB statements or expressions."
C = evalfis(B, fis_name);
The same message appears even when I try to run the example code.
At first I thought it was related to the reinstallation, but after I copied the .m file to another PC and ran it there, the same thing happened, and I don't know why.
When I tried again, I opened the readfis.m file named in the error message, and it looked strange to me (I have pasted it below). However, when I searched the MATLAB installation files (under C:\Program Files\MATLAB\R2014a\toolbox\fuzzy\fuzzy) and opened the evalfis.m there, it was radically different: it looked like a normal .m file. I simply copied that .m file from the fuzzy toolbox into my working directory, replacing the other .m file, and after that I could run the code successfully.
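A quick way to check whether a file in the current folder is shadowing a toolbox function is MATLAB's which command with the -all flag; a minimal check for the two functions mentioned above would be:
% List every evalfis.m MATLAB can see; the first entry is the one that actually runs.
which evalfis -all
% The same check for readfis.m, the file named in the error message.
which readfis -all
Files in the current folder take precedence over toolbox folders, which is why a stray evalfis.m in the working directory gets picked up instead of the toolbox version.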
Here is the code of the REPLACING evalfis.m file (the one copied from the toolbox):
function [output,IRR,ORR,ARR] = evalfis(input, fis, numofpoints);
% EVALFIS Perform fuzzy inference calculations.
%
% Y = EVALFIS(U,FIS) simulates the Fuzzy Inference System FIS for the
% input data U and returns the output data Y. For a system with N
% input variables and L output variables,
% * U is a M-by-N matrix, each row being a particular input vector
% * Y is M-by-L matrix, each row being a particular output vector.
%
% Y = EVALFIS(U,FIS,NPts) further specifies number of sample points
% on which to evaluate the membership functions over the input or output
% range. If this argument is not used, the default value is 101 points.
%
% [Y,IRR,ORR,ARR] = EVALFIS(U,FIS) also returns the following range
% variables when U is a row vector (only one set of inputs is applied):
% * IRR: the result of evaluating the input values through the membership
% functions. This is a matrix of size Nr-by-N, where Nr is the number
% of rules, and N is the number of input variables.
% * ORR: the result of evaluating the output values through the membership
% functions. This is a matrix of size NPts-by-Nr*L. The first Nr
% columns of this matrix correspond to the first output, the next Nr
% columns correspond to the second output, and so forth.
% * ARR: the NPts-by-L matrix of the aggregate values sampled at NPts
% along the output range for each output.
%
% Example:
% fis = readfis('tipper');
% out = evalfis([2 1; 4 9],fis)
% This generates the response
% out =
% 7.0169
% 19.6810
%
% See also READFIS, RULEVIEW, GENSURF.
% Kelly Liu, 10-10-97.
% Copyright 1994-2005 The MathWorks, Inc.
ni = nargin;
if ni<2
    disp('Need at least two inputs');
    output=[];
    IRR=[];
    ORR=[];
    ARR=[];
    return
end
% Check inputs
if ~isfis(fis)
    error('The second argument must be a FIS structure.')
elseif strcmpi(fis.type,'sugeno') & ~strcmpi(fis.impMethod,'prod')
    warning('Fuzzy:evalfis:ImplicationMethod','Implication method should be "prod" for Sugeno systems.')
end
[M,N] = size(input);
Nin = length(fis.input);
if M==1 & N==1,
    input = input(:,ones(1,Nin));
elseif M==Nin & N~=Nin,
    input = input.';
elseif N~=Nin
    error(sprintf('%s\n%s',...
        'The first argument should have as many columns as input variables and',...
        'as many rows as independent sets of input values.'))
end
% Check the fis for empty values
checkfis(fis);
% Issue warning if inputs out of range
inRange = getfis(fis,'inRange');
InputMin = min(input,[],1);
InputMax = max(input,[],1);
if any(InputMin(:)<inRange(:,1)) | any(InputMax(:)>inRange(:,2))
    warning('Fuzzy:evalfis:InputOutOfRange','Some input values are outside of the specified input range.')
end
% Compute output
if ni==2
    numofpoints = 101;
end
[output,IRR,ORR,ARR] = evalfismex(input, fis, numofpoints);
...And this is the content of the REPLACED evalfis.m file (the one that had been sitting in my working directory):
## Copyright (C) 2011-2014 L. Markowsky <lmarkov@users.sourceforge.net>
##
## This file is part of the fuzzy-logic-toolkit.
##
## The fuzzy-logic-toolkit is free software; you can redistribute it
## and/or modify it under the terms of the GNU General Public License
## as published by the Free Software Foundation; either version 3 of
## the License, or (at your option) any later version.
##
## The fuzzy-logic-toolkit is distributed in the hope that it will be
## useful, but WITHOUT ANY WARRANTY; without even the implied warranty
## of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with the fuzzy-logic-toolkit; see the file COPYING. If not,
## see <http://www.gnu.org/licenses/>.
## -*- texinfo -*-
## @deftypefn {Function File} {@var{output} =} evalfis (@var{user_input}, @var{fis})
## @deftypefnx {Function File} {@var{output} =} evalfis (@var{user_input}, @var{fis}, @var{num_points})
## @deftypefnx {Function File} {[@var{output}, @var{rule_input}, @var{rule_output}, @var{fuzzy_output}] =} evalfis (@var{user_input}, @var{fis})
## @deftypefnx {Function File} {[@var{output}, @var{rule_input}, @var{rule_output}, @var{fuzzy_output}] =} evalfis (@var{user_input}, @var{fis}, @var{num_points})
##
## Return the crisp output(s) of an FIS for each row in a matrix of crisp input
## values.
## Also, for the last row of @var{user_input}, return the intermediate results:
##
## @table @var
## @item rule_input
## a matrix of the degree to which
## each FIS rule matches each FIS input variable
## @item rule_output
## a matrix of the fuzzy output for each (rule, FIS output) pair
## @item fuzzy_output
## a matrix of the aggregated output for each FIS output variable
## @end table
##
## The optional argument @var{num_points} specifies the number of points over
## which to evaluate the fuzzy values. The default value of @var{num_points} is
## 101.
##
## @noindent
## Argument @var{user_input}:
##
## @var{user_input} is a matrix of crisp input values. Each row
## represents one set of crisp FIS input values. For an FIS that has N inputs,
## an input matrix of z sets of input values will have the form:
##
## @example
## @group
## [input_11 input_12 ... input_1N] <-- 1st row is 1st set of inputs
## [input_21 input_22 ... input_2N] <-- 2nd row is 2nd set of inputs
## [ ... ] ...
## [input_z1 input_z2 ... input_zN] <-- zth row is zth set of inputs
## @end group
## @end example
##
## @noindent
## Return value @var{output}:
##
## @var{output} is a matrix of crisp output values. Each row represents
## the set of crisp FIS output values for the corresponding row of
## @var{user_input}. For an FIS that has M outputs, an @var{output} matrix
## corresponding to the preceding input matrix will have the form:
##
## @example
## @group
## [output_11 output_12 ... output_1M] <-- 1st row is 1st set of outputs
## [output_21 output_22 ... output_2M] <-- 2nd row is 2nd set of outputs
## [ ... ] ...
## [output_z1 output_z2 ... output_zM] <-- zth row is zth set of outputs
## @end group
## @end example
##
## @noindent
## The intermediate result @var{rule_input}:
##
## The matching degree for each (rule, input value) pair is specified by the
## @var{rule_input} matrix. For an FIS that has Q rules and N input variables,
## the matrix will have the form:
## @example
## @group
## in_1 in_2 ... in_N
## rule_1 [mu_11 mu_12 ... mu_1N]
## rule_2 [mu_21 mu_22 ... mu_2N]
## [ ... ]
## rule_Q [mu_Q1 mu_Q2 ... mu_QN]
## @end group
## @end example
##
## @noindent
## Evaluation of hedges and "not":
##
## Each element of each FIS rule antecedent and consequent indicates the
## corresponding membership function, hedge, and whether or not "not" should
## be applied to the result. The index of the membership function to be used is
## given by the positive whole number portion of the antecedent/consequent
## vector entry, the hedge is given by the fractional portion (if any), and
## "not" is indicated by a minus sign. A "0" as the integer portion in any
## position in the rule indicates that the corresponding FIS input or output
## variable is omitted from the rule.
##
## For custom hedges and the four built-in hedges "somewhat," "very,"
## "extremely," and "very very," the membership function value (without the
## hedge or "not") is raised to the power corresponding to the hedge. All
## hedges are rounded to 2 digits.
##
## For example, if "mu(x)" denotes the matching degree of the input to the
## corresponding membership function without a hedge or "not," then the final
## matching degree recorded in @var{rule_input} will be computed by applying
## the hedge and "not" in two steps. First, the hedge is applied:
##
## @example
## @group
## (fraction == .05) <=> somewhat x <=> mu(x)^0.5 <=> sqrt(mu(x))
## (fraction == .20) <=> very x <=> mu(x)^2 <=> sqr(mu(x))
## (fraction == .30) <=> extremely x <=> mu(x)^3 <=> cube(mu(x))
## (fraction == .40) <=> very very x <=> mu(x)^4
## (fraction == .dd) <=> <custom hedge> x <=> mu(x)^(dd/10)
## @end group
## @end example
##
## After applying the appropriate hedge, "not" is calculated by:
## @example
## minus sign present <=> not x <=> 1 - mu(x)
## minus sign and hedge present <=> not <hedge> x <=> 1 - mu(x)^(dd/10)
## @end example
##
## Hedges and "not" in the consequent are handled similarly.
##
## @noindent
## The intermediate result @var{rule_output}:
##
## For either a Mamdani-type FIS (that is, an FIS that does not have constant or
## linear output membership functions) or a Sugeno-type FIS (that is, an FIS
## that has only constant and linear output membership functions),
## @var{rule_output} specifies the fuzzy output for each (rule, FIS output) pair.
## The format of rule_output depends on the FIS type.
##
## For a Mamdani-type FIS, @var{rule_output} is a @var{num_points} x (Q * M)
## matrix, where Q is the number of rules and M is the number of FIS output
## variables. Each column of this matrix gives the y-values of the fuzzy
## output for a single (rule, FIS output) pair.
##
## @example
## @group
## Q cols Q cols Q cols
## --------------- --------------- ---------------
## out_1 ... out_1 out_2 ... out_2 ... out_M ... out_M
## 1 [ ]
## 2 [ ]
## ... [ ]
## num_points [ ]
## @end group
## @end example
##
## For a Sugeno-type FIS, @var{rule_output} is a 2 x (Q * M) matrix.
## Each column of this matrix gives the (location, height) pair of the
## singleton output for a single (rule, FIS output) pair.
##
## @example
## @group
## Q cols Q cols Q cols
## --------------- --------------- ---------------
## out_1 ... out_1 out_2 ... out_2 ... out_M ... out_M
## location [ ]
## height [ ]
## @end group
## @end example
##
## @noindent
## The intermediate result @var{fuzzy_output}:
##
## The format of @var{fuzzy_output} depends on the FIS type ('mamdani' or
## 'sugeno').
##
## For either a Mamdani-type FIS or a Sugeno-type FIS, @var{fuzzy_output}
## specifies the aggregated fuzzy output for each FIS output.
##
## For a Mamdani-type FIS, the aggregated @var{fuzzy_output} is a
## @var{num_points} x M matrix. Each column of this matrix gives the y-values
## of the fuzzy output for a single FIS output, aggregated over all rules.
##
## @example
## @group
## out_1 out_2 ... out_M
## 1 [ ]
## 2 [ ]
## ... [ ]
## num_points [ ]
## @end group
## @end example
##
## For a Sugeno-type FIS, the aggregated output for each FIS output is a 2 x L
## matrix, where L is the number of distinct singleton locations in the
## @var{rule_output} for that FIS output:
##
## @example
## @group
## singleton_1 singleton_2 ... singleton_L
## location [ ]
## height [ ]
## @end group
## @end example
##
## Then @var{fuzzy_output} is a vector of M structures, each of which has an index and
## one of these matrices.
##
## @noindent
## Examples:
##
## Seven examples of using evalfis are shown in:
## @itemize @bullet
## @item
## cubic_approx_demo.m
## @item
## heart_disease_demo_1.m
## @item
## heart_disease_demo_2.m
## @item
## investment_portfolio_demo.m
## @item
## linear_tip_demo.m
## @item
## mamdani_tip_demo.m
## @item
## sugeno_tip_demo.m
## @end itemize
##
## @seealso{cubic_approx_demo, heart_disease_demo_1, heart_disease_demo_2, investment_portfolio_demo, linear_tip_demo, mamdani_tip_demo, sugeno_tip_demo}
## @end deftypefn
## Author: L. Markowsky
## Keywords: fuzzy-logic-toolkit fuzzy inference system fis
## Directory: fuzzy-logic-toolkit/inst/
## Filename: evalfis.m
## Last-Modified: 20 Aug 2012
function [output, rule_input, rule_output, fuzzy_output] = ...
         evalfis (user_input, fis, num_points = 101)
  ## If evalfis was called with an incorrect number of arguments, or
  ## the arguments do not have the correct type, print an error message
  ## and halt.
  if ((nargin != 2) && (nargin != 3))
    puts ("Type 'help evalfis' for more information.\n");
    error ("evalfis requires 2 or 3 arguments\n");
  elseif (!is_fis (fis))
    puts ("Type 'help evalfis' for more information.\n");
    error ("evalfis's second argument must be an FIS structure\n");
  elseif (!is_input_matrix (user_input, fis))
    puts ("Type 'help evalfis' for more information.\n");
    error ("evalfis's 1st argument must be a matrix of input values\n");
  elseif (!is_pos_int (num_points))
    puts ("Type 'help evalfis' for more information.\n");
    error ("evalfis's third argument must be a positive integer\n");
  endif
  ## Call a private function to compute the output.
  ## (The private function is also called by gensurf.)
  [output, rule_input, rule_output, fuzzy_output] = ...
    evalfis_private (user_input, fis, num_points);
endfunction
Related
I use ipython in a terminal (NOT in a notebook), and by default it autoindents with 4 spaces.
How do I change the number of automatically-inserted spaces?
Number of spaces inserted by the TAB key
Assuming you are on Linux, you can locate your ipython installation directory with:
which ipython
It will return a path ending in /bin/ipython. Change directory to that path, leaving off the trailing /bin/ipython part.
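Alternatively, you can ask Python directly for the IPython package directory; this one-liner assumes that python is the same interpreter your ipython runs on:
python -c "import IPython, os; print(os.path.dirname(IPython.__file__))"
Running the find command below from that directory will also locate shortcuts.py.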
Then locate the shortcuts.py file where the indent buffer is defined:
find ./ -type f -name "shortcuts.py"
In that file, replace the 4 in the function below with 2:
def indent_buffer(event):
event.current_buffer.insert_text(' ' * 4)
Unfortunately, that 4 is not exposed as a configuration option, so each IPython installation currently has to be edited by hand, which is cumbersome when working with many environments.
Number of spaces inserted by autoindent
Visit /path/to/your/IPython/core/inputtransformer2.py and modify two locations where the number of spaces is hard-coded as 4:
diff --git a/IPython/core/inputtransformer2.py b/IPython/core/inputtransformer2.py
index 37f0e7699..7f6f4ddb7 100644
--- a/IPython/core/inputtransformer2.py
+++ b/IPython/core/inputtransformer2.py
@@ -563,6 +563,7 @@ def show_linewise_tokens(s: str):
# Arbitrary limit to prevent getting stuck in infinite loops
TRANSFORM_LOOP_LIMIT = 500
+INDENT_SPACES = 2 # or whatever you prefer!
class TransformerManager:
"""Applies various transformations to a cell or code block.
@@ -744,7 +745,7 @@ def check_complete(self, cell: str):
ix += 1
indent = tokens_by_line[-1][ix].start[1]
- return 'incomplete', indent + 4
+ return 'incomplete', indent + INDENT_SPACES
if tokens_by_line[-1][0].line.endswith('\\'):
return 'incomplete', None
@@ -778,7 +779,7 @@ def find_last_indent(lines):
m = _indent_re.match(lines[-1])
if not m:
return 0
- return len(m.group(0).replace('\t', ' '*4))
+ return len(m.group(0).replace('\t', ' '*INDENT_SPACES))
class MaybeAsyncCompile(Compile):
This is a follow-up question to a response here.
Below, I can generate Jaccard similarity for adjacent pairs of addresses by the same president (e.g., 1993 then 1997 for Clinton). How can I use as.list() to return a list of unique items for each comparison? Here's what I'm working with:
library("quanteda")
data_corpus_inaugural$president <- paste(data_corpus_inaugural$President,
data_corpus_inaugural$FirstName,
sep = ", "
)
head(data_corpus_inaugural$president, 10)
## [1] "Washington, George" "Washington, George" "Adams, John"
## [4] "Jefferson, Thomas" "Jefferson, Thomas" "Madison, James"
## [7] "Madison, James" "Monroe, James" "Monroe, James"
## [10] "Adams, John Quincy"
simpairs <- lapply(unique(data_corpus_inaugural$president), function(x) {
dfmat <- corpus_subset(data_corpus_inaugural, president == x) %>%
dfm(remove_punct = TRUE)
df <- data.frame()
years <- sort(dfmat$Year)
for (i in seq_along(years)[-length(years)]) {
sim <- textstat_simil(
dfm_subset(dfmat, Year %in% c(years[i], years[i + 1])),
method = "jaccard"
)
df <- rbind(df, as.data.frame(sim))
}
df
})
do.call(rbind, simpairs)
## document1 document2 jaccard
## 1 1789-Washington 1793-Washington 0.09250399
## 2 1801-Jefferson 1805-Jefferson 0.20512821
## 3 1809-Madison 1813-Madison 0.20138889
## 4 1817-Monroe 1821-Monroe 0.29436202
## 5 1829-Jackson 1833-Jackson 0.20693928
## 6 1861-Lincoln 1865-Lincoln 0.14055885
## 7 1869-Grant 1873-Grant 0.20981595
## 8 1885-Cleveland 1893-Cleveland 0.23037543
## 9 1897-McKinley 1901-McKinley 0.25031211
## 10 1913-Wilson 1917-Wilson 0.21285564
## 11 1933-Roosevelt 1937-Roosevelt 0.20956522
## 12 1937-Roosevelt 1941-Roosevelt 0.20081549
## 13 1941-Roosevelt 1945-Roosevelt 0.18740157
## 14 1953-Eisenhower 1957-Eisenhower 0.21566976
## 15 1969-Nixon 1973-Nixon 0.23451777
## 16 1981-Reagan 1985-Reagan 0.24381368
## 17 1993-Clinton 1997-Clinton 0.24199623
## 18 2001-Bush 2005-Bush 0.24170616
## 19 2009-Obama 2013-Obama 0.24739195
So, for example, how would I retrieve the items in 1789-Washington that are not in 1793-Washington, and vice versa? And, more generally, how would I retrieve the unique items for each pair of adjacent texts?
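For reference, here is a minimal sketch of the comparison being asked about, using setdiff() on the feature names of single-document dfms built with the same preprocessing as above (the object names dfm_1789 and dfm_1793 are only illustrative):
# One single-document dfm per address, with punctuation removed as above.
dfm_1789 <- dfm(corpus_subset(data_corpus_inaugural, Year == 1789), remove_punct = TRUE)
dfm_1793 <- dfm(corpus_subset(data_corpus_inaugural, Year == 1793), remove_punct = TRUE)
# Features present in one address but absent from the other.
only_in_1789 <- setdiff(featnames(dfm_1789), featnames(dfm_1793))
only_in_1793 <- setdiff(featnames(dfm_1793), featnames(dfm_1789))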
I want to conduct a simple two-sample t-test in R to compare marginal effects generated by ggpredict (or ggeffect).
Both ggpredict and ggeffect provide nice output: (1) a table (predicted probability / std error / CIs) and (2) a plot. However, they do not provide p-values for assessing the statistical significance of the marginal effects (i.e., is the difference between two predicted probabilities different from zero?). Further, since I'm working with interaction effects, I'm also interested in two-sample t-tests for the first differences (between two marginal effects) and the second differences.
Is there an easy way to run the relevant t-tests with ggpredict/ggeffect output? Other options?
Attaching:
- reprex code with fictitious data
To be specific, I want to test the following first differences:
--> .67 - .33 = .34 (different from zero?)
--> .5 - .5 = 0 (different from zero?)
...and the following second difference:
--> 0 - .34 = -.34 (different from zero?)
See also Figure 12 / Table 3 in Mize 2019 (interaction effects in nonlinear models)
Thanks Scott
library(mlogit)
#> Loading required package: dfidx
#>
#> Attaching package: 'dfidx'
#> The following object is masked from 'package:stats':
#>
#> filter
library(sjPlot)
library(ggeffects)
# create example data set: one row per alternative (2 respondents shown; each respondent answers 3 choice sets with 2 alternatives per set, i.e. 12 rows)
cedata.1 <- data.frame( id = c(1,1,1,1,1,1,2,2,2,2,2,2), # respondent ID.
QES = c(1,1,2,2,3,3,1,1,2,2,3,3), # Choice set (with 2 alternatives)
Alt = c(1,2,1,2,1,2,1,2,1,2,1,2), # Alt 1 or Alt 2 in choice set
LOC = c(0,0,1,1,0,1,0,1,1,0,0,1), # attribute describing alternative. binary categorical variable
SIZE = c(1,1,1,0,0,1,0,0,1,1,0,1), # attribute describing alternative. binary categorical variable
Choice = c(0,1,1,0,1,0,0,1,0,1,0,1), # if alternative is Chosen (1) or not (0)
gender = c(1,1,1,1,1,1,0,0,0,0,0,0) # male or female (repeats for each individual)
)
# convert dep var Choice to factor as required by sjPlot
cedata.1$Choice <- as.factor(cedata.1$Choice)
cedata.1$LOC <- as.factor(cedata.1$LOC)
cedata.1$SIZE <- as.factor(cedata.1$SIZE)
# estimate model.
glm.model <- glm(Choice ~ LOC*SIZE, data=cedata.1, family = binomial(link = "logit"))
# estimate MEs for use in IE assessment
LOC.SIZE <- ggpredict(glm.model, terms = c("LOC", "SIZE"))
LOC.SIZE
#>
#> # Predicted probabilities of Choice
#> # x = LOC
#>
#> # SIZE = 0
#>
#> x | Predicted | SE | 95% CI
#> -----------------------------------
#> 0 | 0.33 | 1.22 | [0.04, 0.85]
#> 1 | 0.50 | 1.41 | [0.06, 0.94]
#>
#> # SIZE = 1
#>
#> x | Predicted | SE | 95% CI
#> -----------------------------------
#> 0 | 0.67 | 1.22 | [0.15, 0.96]
#> 1 | 0.50 | 1.00 | [0.12, 0.88]
#> Standard errors are on the link-scale (untransformed).
# plot
# plot(LOC.SIZE, connect.lines = TRUE)
This code snippet worked just fine until I updated R (3.6.3) and RStudio (1.2.5042) yesterday, though it is not obvious to me that the update is the source of the problem.
In a nutshell, I convert 91 pdf files into a volatile corpus named Vcorp and confirm that I created a volatile corpus as follows:
> Vcorp <- VCorpus(VectorSource(citiesText))
> class(Vcorp)
[1] "VCorpus" "Corpus"
Then I attempt to import this tm VCorpus into quanteda, but I keep getting an error message that I did not get before (e.g., the day before the update).
> data(Vcorp, package = "tm")
> citiesCorpus <- corpus(Vcorp)
Error in data.frame(..., check.names = FALSE) :
arguments imply differing number of rows: 8714, 91
Any suggestions? Thank you.
Impossible to know the exact problem without a) version information on your packages and b) a reproducible example.
Why use tm at all? You could have created a quanteda corpus directly as:
corpus(citiesText)
Converting a VCorpus works fine for me.
library("quanteda")
## Package version: 2.0.1
library("tm")
packageVersion("tm")
## [1] ‘0.7.7’
reut21578 <- system.file("texts", "crude", package = "tm")
VCorp <- VCorpus(
DirSource(reut21578, mode = "binary"),
list(reader = readReut21578XMLasPlain)
)
corpus(VCorp)
## Corpus consisting of 20 documents and 16 docvars.
## text1 :
## "Diamond Shamrock Corp said that effective today it had cut i..."
##
## text2 :
## "OPEC may be forced to meet before a scheduled June session t..."
##
## text3 :
## "Texaco Canada said it lowered the contract price it will pay..."
##
## text4 :
## "Marathon Petroleum Co said it reduced the contract price it ..."
##
## text5 :
## "Houston Oil Trust said that independent petroleum engineers ..."
##
## text6 :
## "Kuwait"s Oil Minister, in remarks published today, said ther..."
##
## [ reached max_ndoc ... 14 more documents ]
I used to use the code below to calculate standardized coefficients of an lmer model. However, with the new version of lme4 the structure of the returned object has changed.
How can I adapt the function stdCoef.lmer so that it works with the new lme4 version?
# Install old version of lme4
install.packages("lme4.0", type="both",
repos=c("http://lme4.r-forge.r-project.org/repos",
getOption("repos")[["CRAN"]]))
# Load package
detach("package:lme4", unload=TRUE)
library(lme4.0)
# Define function to get standardized coefficients from an lmer
# See: https://github.com/jebyrnes/ext-meta/blob/master/r/lmerMetaPrep.R
stdCoef.lmer <- function(object) {
sdy <- sd(attr(object, "y"))
sdx <- apply(attr(object, "X"), 2, sd)
sc <- fixef(object)*sdx/sdy
# mimic se.ranef from package "arm"
se.fixef <- function(obj) attr(summary(obj), "coefs")[,2]
se <- se.fixef(object)*sdx/sdy
return(list(stdcoef=sc, stdse=se))
}
# Run model
fm0 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
# Get standardized coefficients
stdCoef.lmer(fm0)
# Comparison model with prescaled variables
fm0.comparison <- lmer(scale(Reaction) ~ scale(Days) + (scale(Days) | Subject), sleepstudy)
The answer by @LeonardoBergamini works, but this one is more compact and understandable, and it only uses standard accessors, so it is less likely to break in the future if/when the structure of the summary() output or the internal structure of the fitted model changes.
stdCoef.merMod <- function(object) {
sdy <- sd(getME(object,"y"))
sdx <- apply(getME(object,"X"), 2, sd)
sc <- fixef(object)*sdx/sdy
se.fixef <- coef(summary(object))[,"Std. Error"]
se <- se.fixef*sdx/sdy
return(data.frame(stdcoef=sc, stdse=se))
}
library("lme4")
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
fixef(fm1)
## (Intercept) Days
## 251.40510 10.46729
stdCoef.merMod(fm1)
## stdcoef stdse
## (Intercept) 0.0000000 0.00000000
## Days 0.5352302 0.07904178
(This does give the same results as stdCoef.lmer in @LeonardoBergamini's answer.)
You can get partially scaled coefficients — scaled by 2 times the SD of x, but not scaled by SD(y), and not centred — using broom.mixed::tidy + dotwhisker::by_2sd:
library(broom.mixed)
library(dotwhisker)
(fm1
|> tidy(effect="fixed")
|> by_2sd(data=sleepstudy)
|> dplyr::select(term, estimate, std.error)
)
This should work:
stdCoef.lmer <- function(object) {
sdy <- sd(attr(object, "resp")$y) # the y values are now in the 'y' slot
### of the resp attribute
sdx <- apply(attr(object, "pp")$X, 2, sd) # And the X matrix is in the 'X' slot of the pp attribute
sc <- fixef(object)*sdx/sdy
# mimic se.ranef from package "arm"
se.fixef <- function(obj) as.data.frame(summary(obj)[10])[,2] # last change - extracting
## the standard errors from the summary
se <- se.fixef(object)*sdx/sdy
return(data.frame(stdcoef=sc, stdse=se))
}