I'm using stargazer to create regression outputs for my bachelor thesis. Due to the structure of my data I have to use clustered models (code below). I'm using the cluster.vcov function from the multiwayvcov package, which works perfectly. However, stargazer does not support it. Do you know another way to create outputs as nice as stargazer does? Or do you know another package/command to cluster the models that is supported by stargazer?
model1.1.2 <- lm(leaflet ~ partisan + as.factor(gender) + age + as.factor(education) + meaning + as.factor(polintrest), data = voxit)
summary(model1.1.2)
#clustering (cluster.vcov() comes from the multiwayvcov package, coeftest() from lmtest)
library(multiwayvcov)
library(lmtest)
vcov_clust1.1.2 <- cluster.vcov(model1.1.2, cbind(voxit$id, voxit$projetx))
coeftest(model1.1.2, vcov_clust1.1.2)
You can supply the adjusted standard errors and p-values to stargazer manually.
# model1 and model2 are both objects returned from coeftest()
# Capture them in an object and extract the ses (2nd column) and ps (4th column) in a list
ses <- list(model1[,2], model2[,2])
ps <- list(model1[,4], model2[,4])
# you can then run your normal stargazer command and supply
# the se- and p-values manually to the stargazer function
stargazer(model1, model2, type = "text", se = ses, p = ps, p.auto = F)
Hope this helps!
I'm very new to caret package and nnet in R. I've done some projects related to ANN with Matlab before, but now I need to work with R and I need some basic help.
My input dataset has 1000 observations (in rows) and 23 variables (in columns). My output has 1000 observations and 12 variables.
Here are some sample data that represent my dataset and might help to understand my problem better:
input = as.data.frame(matrix(sample(1:20, 100, replace = TRUE), ncol = 10))
colnames(input) = paste("X", 1:10, sep = "")  # 10 observations and 10 variables
output = as.data.frame(matrix(sample(1:20, 70, replace = TRUE), ncol = 7))
colnames(output) = paste("Y", 1:7, sep = "")  # 10 observations and 7 variables
#nnet with caret:
net1 = train(output ~., data = input, method= "nnet", maxit = 1000)
When I run the code, I get this error:
error: invalid type (list) for variable 'output'.
I think I have to add all output variables separately (which is very annoying, especially with a lot of variables), like this:
train(output$Y1 + output$Y2 + output$Y3 + output$Y4 + output$Y5 +
output$Y6 + output$Y7 ~., data = input, method= "nnet", maxit = 1000)
This time it runs but I get this error:
Error in [.data.frame(data, , all.vars(Terms), drop = FALSE) :
undefined columns selected
I tried the neuralnet package; with the code below it works perfectly, but I still have to add the output variables separately :(
net1 = neuralnet(output$Y1 + output$Y2 + output$Y3 + output$Y4 +
output$Y5 + output$Y6 + output$Y7 ~., data = input, hidden=c(2,10))
P.S. Since these sample data are created randomly, neuralnet cannot converge on them, but on my real data it works well (compared with the MATLAB ANN).
Now, if you could help me with a way to include the output variables automatically (not manually), that would solve my problem (even if only with neuralnet rather than caret).
Use the str() function and make sure that output is a data frame; it looks like you are passing a list to the train function. This may be because of a transformation you applied to output earlier.
str(output)
Without a full script of the earlier steps it's difficult to understand what is going on.
After trying different things and searching around, I finally found a solution:
First, we must use as.formula to specify the relationship between our inputs and outputs. With the code below we don't need to add all the variables separately:
names1 <- colnames(output)  # the names of our output variables
names2 <- colnames(input)   # the names of our input variables
a <- as.formula(paste(paste(names1, collapse = "+"), "~", paste(names2, collapse = "+")))
Then we have to combine our input and output into a single data frame:
all_data = cbind(output, input)
then, use neuralnet like this:
net1 = neuralnet(formula = a, data = all_data, hidden=c(2,10))
plot(net1)
This also works with the caret package:
net1 = train(a, data = all_data, method= "nnet", maxit = 1000)
but it seems neuralnet works faster (at least in my case).
I hope this helps someone else.
I am having a couple of issues putting this query into functional form.
select from tableName where i=fby[(last;i);([]column_one;column_two)]
This is what I got:
?[tableName;fby;enlist(=;`i;(enlist;last;`i);(+:;(!;enlist`column_one`column_two;(enlist;`column_one;`column_two))));0b;()]
but I get a type error.
Any suggestions?
Consider using the following function, adapted from the buildQuery function given in the whitepaper on parse trees. It is a pretty useful tool for quickly developing in q; this version improves on the one given in the linked whitepaper, having been extended to handle updates by reference (i.e., update x:3 from `tab).
\c 30 200
tidy:{ssr/[;("\"~~";"~~\"");("";"")] $[","=first x;1_x;x]};
strBrk:{y,(";" sv x),z};
//replace k representation with equivalent q keyword
kreplace:{[x] $[`=qval:.q?x;x;"~~",string[qval],"~~"]};
funcK:{$[0=t:type x;.z.s each x;t<100h;x;kreplace x]};
//replace eg ,`FD`ABC`DEF with "enlist`FD`ABC`DEF"
ereplace:{"~~enlist",(.Q.s1 first x),"~~"};
ereptest:{((0=type x) & (1=count x) & (11=type first x)) | ((11=type x)&(1=count x))};
funcEn:{$[ereptest x;ereplace x;0=type x;.z.s each x;x]};
basic:{tidy .Q.s1 funcK funcEn x};
addbraks:{"(",x,")"};
//the where clause needs to be a list of where clauses, so if there is only one where clause we need to enlist it
stringify:{$[(0=type x) & 1=count x;"enlist ";""],basic x};
//if a dictionary, apply to both keys and values
ab:{$[(0=count x) | -1=type x;.Q.s1 x;99=type x;(addbraks stringify key x),"!",stringify value x;stringify x]};
inner:{[x]
idxs:2 3 4 5 6 inter ainds:til count x;
x:#[x;idxs;'[ab;eval]];
if[6 in idxs;x[6]:ssr/[;("hopen";"hclose");("iasc";"idesc")] x[6]];
//for select statements within select statements
//This line has been adjusted
x[1]:$[-11=type x 1;x 1;$[11h=type x 1;[idxs,:1;"`",string first x 1];[idxs,:1;.z.s x 1]]];
x:#[x;ainds except idxs;string];
x[0],strBrk[1_x;"[";"]"]
};
buildSelect:{[x]
inner parse x
};
We can use this to create the functional query that will work:
q)n:1000
q)tab:([]sym:n?`3;col1:n?100.0;col2:n?10.0)
q)buildSelect "select from tab where i=fby[(last;i);([]col1;col2)]"
"?[tab;enlist (=;`i;(fby;(enlist;last;`i);(flip;(lsq;enlist`col1`col2;(enlist;`col1;`col2)))));0b;()]"
So we have the following as the functional form
?[tab;enlist (=;`i;(fby;(enlist;last;`i);(flip;(lsq;enlist`col1`col2;(enlist;`col1;`col2)))));0b;()]
// Applying this
q)?[tab;enlist (=;`i;(fby;(enlist;last;`i);(flip;(lsq;enlist`col1`col2;(enlist;`col1;`col2)))));0b;()]
sym col1 col2
----------------------
bah 18.70281 3.927524
jjb 35.95293 5.170911
ihm 48.09078 5.159796
...
Glad you were able to fix your problem with converting your query to functional form.
Generally, when you use parse on a statement containing fby, q will convert that function into its k definition. Usually you should just be able to replace this k code with the q keyword itself (i.e. change (k){stuff} to fby) and the query will run properly in functional form.
If you check out https://code.kx.com/v2/wp/parse-trees/ it goes into more detail about parse trees and functional form. It also contains a script called buildQuery which will return the functional form of a query of interest as a string, which can be quite handy and save time when a functional form is complex.
I actually got it myself ->
?[tableName;((=;`i;(fby;(enlist;last;`i);(+:;(!;enlist`column_one`column_two;(enlist;`column_one;`column_two)))));(in;`venue;enlist`venueone`venuetwo));0b;()]
The issue was a missing () in the statement. It works fine now.
If someone wants to add a more detailed explanation of how manual parse trees are built and how the generic (k){} function can be replaced with the actual q function, feel free to add your answer and I'll accept and upvote it.
Is there a way to update a subset of parameters in dynet? For instance in the following toy example, first update h1, then h2:
model = ParameterCollection()
h1 = model.add_parameters((hidden_units, dims))
h2 = model.add_parameters((hidden_units, dims))
...
for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h1)
    renew_cg()

for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h2)
    renew_cg()
I know that the update_subset interface exists for this and works based on the given parameter indexes, but it is not documented anywhere how we can get the parameter indexes in dynet's Python API.
A solution is to use the flag update = False when creating expressions for parameters (including lookup parameters):
import dynet as dy
import numpy as np
model = dy.Model()
pW = model.add_parameters((2, 4))
pb = model.add_parameters(2)
trainer = dy.SimpleSGDTrainer(model)
def step(update_b):
    dy.renew_cg()
    x = dy.inputTensor(np.ones(4))
    W = pW.expr()
    # update b?
    b = pb.expr(update = update_b)
    loss = dy.pickneglogsoftmax(W * x + b, 0)
    loss.backward()
    trainer.update()
    # dy.renew_cg()

print(pb.as_array())
print(pW.as_array())
step(True)
print(pb.as_array()) # b updated
print(pW.as_array())
step(False)
print(pb.as_array()) # b not updated
print(pW.as_array())
For update_subset, I would guess that the indices are the integers appended to the parameter names (.name()).
In the doc, we are supposed to use a get_index function.
Another option is dy.nobackprop(), which prevents the gradient from propagating beyond a certain node in the graph.
And yet another option is to zero the gradient of the parameters that do not need to be updated (.scale_gradient(0)).
These methods are equivalent to zeroing the gradient before the update, so the parameter will still be updated if the optimizer uses its momentum from previous training steps (MomentumSGDTrainer, AdamTrainer, ...).
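For illustration, here is a minimal sketch of those last two options, reusing the toy pW/pb model from the code above. It assumes dy.nobackprop and the scale_gradient method behave as described (they are only named here, so check the API of your DyNet version):
import dynet as dy
import numpy as np

model = dy.Model()
pW = model.add_parameters((2, 4))
pb = model.add_parameters(2)
trainer = dy.SimpleSGDTrainer(model)

# option 1: stop the gradient at a node, so pW receives no gradient at all
dy.renew_cg()
x = dy.inputTensor(np.ones(4))
W = dy.nobackprop(pW.expr())   # gradient is not propagated into pW
loss = dy.pickneglogsoftmax(W * x + pb.expr(), 0)
loss.scalar_value()
loss.backward()
trainer.update()               # only pb effectively moves (with plain SGD)

# option 2: zero pW's accumulated gradient just before the update
dy.renew_cg()
x = dy.inputTensor(np.ones(4))
loss = dy.pickneglogsoftmax(pW.expr() * x + pb.expr(), 0)
loss.scalar_value()
loss.backward()
pW.scale_gradient(0)           # wipe pW's gradient
trainer.update()               # with momentum/Adam, pW could still move a little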
I am doing a small script to get SNMP traps with PySnmp.
I am able to get the oid = value pairs, but the value is very long, with only a small piece of information at the end. How can I access just the OctetString that comes at the end of the value? Is there a way other than string manipulation? Please comment.
OID =_BindValue(componentType=NamedTypes(NamedType('value', ObjectSyntax------------------------------------------------(DELETED)-----------------(None, OctetString(b'New Alarm'))))
Is it possible to get the output like the following, as is available from another SNMP client:
.iso.org.dod.internet.private.enterprises.xxxx.1.1.2.2.14: CM_DAS Alarm Traps:
Edit: the code is:
for oid, val in varBinds:
    print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))
    target.write(str(val))
On screen it prints the short form, but in the file, val is very long.
Using target.write(str(val[0][1][2])) does not work for all of them (the program stops with an error), but the first OID (the time ticks) comes out fine.
How can I get the value from the tail, since that is where the actual value is found for all OIDs?
Thanks.
SNMP transfers information in the form of a sequence of OID-value pairs called variable-bindings:
variable_bindings = [[oid1, value1], [oid2, value2], ...]
Once you get the variable-bindings sequence from SNMP PDU, to access value1, for example, you might do:
variable_binding1 = variable_bindings[0]
value1 = variable_binding1[1]
To access the tail part of value1 (assuming it's a string) you could simply take a slice of it:
tail_of_value1 = value1[-10:]
I guess in your question you operate on a single variable_binding, not a sequence of them.
If you want pysnmp to translate the OID-value pair into a human-friendly representation (MIB object name, MIB object value), you'd have to pass the original OID-value pair to the ObjectType class and run it through the MIB resolver as explained in the documentation.
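As a rough sketch of that resolution step (based on the pysnmp SMI documentation; which MIB modules you need to load depends on your agent, and varBinds is assumed to be the OID-value sequence you already extract from the trap PDU):
from pysnmp.smi import builder, view, rfc1902

# build a MIB view once, loading at least the core MIBs
mibBuilder = builder.MibBuilder()
mibBuilder.loadModules('SNMPv2-MIB')
mibView = view.MibViewController(mibBuilder)

# varBinds: the OID-value pairs taken from the received PDU, as in your loop
for oid, val in varBinds:
    # wrap the raw pair and resolve it against the loaded MIBs
    resolved = rfc1902.ObjectType(rfc1902.ObjectIdentity(oid), val).resolveWithMib(mibView)
    print(resolved.prettyPrint())   # e.g. SNMPv2-MIB::sysUpTime.0 = <ticks>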
Thanks...
The following code works somewhat like what I was looking for:
if str(oid) == "1.3.6.1.2.1.1.3.0":
    target.write(" = str(val[0][1]['timeticks-value']) = " + str(val[0][1]['timeticks-value']))  # time ticks
else:
    target.write("= val[0][0]['string-value']= " + str(val[0][0]['string-value']))
[This is my sample data.]
What I have been trying to achieve is forecasting in R, with dates read from a CSV input, via RStudio.
When I try to change the data type of the date field in my input using as.Date(my_date_field, "%Y-%m-%d"), class(my_date_field) does return "Date", but printing the values of my_date_field gives only NAs.
So I am unable to forecast on a timeline basis at all.
Please help me sort out this issue.
The code I've used for forecasting is:
library(forecast)
library(lubridate)
FitData <- read.csv("~//Power BI//fit.csv")
Fitdataset <- aggregate(FitData$Metric ~ FitData$PED, data = FitData, FUN= sum)
Fitdataset$`FitData$PED` <- as.Date(Fitdataset$`FitData$PED`, format="%y-%d-%m")
ts_FitData <- ts(Fitdataset$`FitData$Metric`, frequency=12, start=c(Fitdataset$`FitData$PED`[1],1))
decom <- stl(ts_FitData, s.window = "periodic")
pred <- forecast(decom, h = 7)
plot(pred)