I'm following the PyTorch Forecasting tutorial: https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/building.html
I implemented an LSTM using AutoRegressiveBaseModelWithCovariates and initialized the model from my dataset:
from pytorch_forecasting.models.rnn import RecurrentNetwork
...
model = RecurrentNetwork.from_dataset(dataset_with_covariates)
I've been asked to get the output of a hidden layer and visualize it with t-SNE or UMAP (something I've done before with Keras). Unfortunately, I'm new to PyTorch. Does anyone know how to do this?
Here's the summary.
| Name | Type | Params
----------------------------------------------------------------------------------
0 | loss | MAE | 0
1 | logging_metrics | ModuleList | 0
2 | logging_metrics.0 | SMAPE | 0
3 | logging_metrics.1 | MAE | 0
4 | logging_metrics.2 | RMSE | 0
5 | logging_metrics.3 | MAPE | 0
6 | logging_metrics.4 | MASE | 0
7 | embeddings | MultiEmbedding | 47
8 | embeddings.embeddings | ModuleDict | 47
9 | embeddings.embeddings.level_0 | Embedding | 12
10 | embeddings.embeddings.supervisorvehiclestatus | Embedding | 35
11 | rnn | LSTM | 2.5 K
12 | output_projector | Linear | 11
----------------------------------------------------------------------------------
2.5 K Trainable params
0 Non-trainable params
2.5 K Total params
0.010 Total estimated model params size (MB)
In an attempt to find the layer name, I did:
for name, layer in model.named_modules():
    print(name, layer)
RecurrentNetwork(
  (loss): MAE()
  (logging_metrics): ModuleList(
    (0): SMAPE()
    (1): MAE()
    (2): RMSE()
    (3): MAPE()
    (4): MASE()
  )
  (embeddings): MultiEmbedding(
    (embeddings): ModuleDict(
      (group_name): Embedding(4, 3)
      (categorical_var): Embedding(7, 5)
    )
  )
  (rnn): LSTM(28, 10, num_layers=2, batch_first=True, dropout=0.1)
  (output_projector): Linear(in_features=10, out_features=1, bias=True)
)
loss MAE()
logging_metrics ModuleList(
  (0): SMAPE()
  (1): MAE()
  (2): RMSE()
  (3): MAPE()
  (4): MASE()
)
logging_metrics.0 SMAPE()
logging_metrics.1 MAE()
logging_metrics.2 RMSE()
logging_metrics.3 MAPE()
logging_metrics.4 MASE()
embeddings MultiEmbedding(
  (embeddings): ModuleDict(
    (group_name): Embedding(4, 3)
    (categorical_var): Embedding(7, 5)
  )
)
embeddings.embeddings ModuleDict(
  (group_name): Embedding(4, 3)
  (categorical_var): Embedding(7, 5)
)
embeddings.embeddings.group_name Embedding(4, 3)
embeddings.embeddings.categorical_var Embedding(7, 5)
rnn LSTM(28, 10, num_layers=2, batch_first=True, dropout=0.1)
output_projector Linear(in_features=10, out_features=1, bias=True)
I thought I could do something like the following to get the activations, but it is not working:
def get_hidden_features(x, layer):
    activation = {}

    def get_activation(name):
        def hook(m, i, o):
            activation[name] = o.detach()
        return hook

    model.register_forward_hook(get_activation(layer))
    _ = model(x)
    return activation[layer]

outhidden = get_hidden_features(x, "rnn")
Returns:
AttributeError: 'Output' object has no attribute 'detach'
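For what it's worth, the error arises because the hook above is registered on the whole model rather than on the "rnn" submodule: the model's forward returns a prediction Output object, which has no .detach(). A minimal sketch of a likely fix, assuming model and x are as in the question (note that an LSTM's forward returns a tuple (output, (h_n, c_n)), so the hook also needs to unpack it):

import torch

def get_hidden_features(x, layer_name):
    """Capture the output of a named submodule during one forward pass."""
    activation = {}

    def hook(module, inputs, output):
        # nn.LSTM returns (output, (h_n, c_n)); keep the output sequence.
        out = output[0] if isinstance(output, tuple) else output
        # If the module fires more than once per forward pass (e.g. during
        # autoregressive decoding), only the last call's output is kept.
        activation[layer_name] = out.detach()

    # Register on the submodule itself, not on the whole model.
    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    with torch.no_grad():
        _ = model(x)
    handle.remove()  # clean up so the hook does not fire on later calls
    return activation[layer_name]

hidden = get_hidden_features(x, "rnn")  # shape: (batch, seq_len, hidden_size)
# For t-SNE/UMAP, reduce to one vector per sequence first, e.g.:
# features = hidden[:, -1, :].cpu().numpy()  # last time step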
  Reservation_branch_code | ON_ACCOUNT | Rcount
0                    1101 |        170 |   5484
1                    1103 |        101 |   5111
2                    1118 |          1 |    232
3                    1121 |          0 |     27
4                    1126 |         90 |    191
I would like to chart this sorted by "Rcount", with "Reservation_branch_code" on the x axis.
The code below gives me a chart, but it is not sorted by Rcount:
base = alt.Chart(df1).transform_fold(
    ['Rcount', 'ON_ACCOUNT'],
    as_=['column', 'value']
)

bars = base.mark_bar().encode(
    # x='Reservation_branch_code:N',
    x='Reservation_branch_code:O',
    y=alt.Y('value:Q', stack=None),  # stack=None enables layered bars
    color=alt.Color('column:N', scale=alt.Scale(range=["#f50520", "#bab6b7"])),
    tooltip=alt.Tooltip(['ON_ACCOUNT', 'Rcount']),
    # order=alt.Order('value:Q')
)

text = base.mark_text(
    align='center',
    color='black',
    baseline='middle',
    dx=0, dy=-8,  # nudges the text upward so it doesn't sit on top of the bar
).encode(
    x='Reservation_branch_code:N',
    y='value:Q',
    text=alt.Text('value:Q', format='.0f')
)

rule = alt.Chart(df1).mark_rule(color='blue').encode(
    y='mean(Rcount):Q'
)

(bars + text + rule).properties(width=790, height=330)
I sorted the data in the dataframe and used that df in the Altair chart, but the x axis is still not sorted by the Rcount column. Thanks.
You can pass a list with the sort order:
import altair as alt
from vega_datasets import data
source = data.barley()
alt.Chart(source).mark_bar().encode(
    x='sum(yield):Q',
    y=alt.Y('site:N', sort=source['site'].unique().tolist())
)
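Applied to the chart in the question, the same idea also works with a computed sort order; alt.EncodingSortField sorts one encoding by another column, so the data does not need to be pre-sorted. A sketch, assuming df1 and base from the question:

import altair as alt

bars = base.mark_bar().encode(
    x=alt.X(
        'Reservation_branch_code:O',
        # sort the x categories by their Rcount value, largest first
        sort=alt.EncodingSortField(field='Rcount', op='max', order='descending'),
    ),
    y=alt.Y('value:Q', stack=None),
    color=alt.Color('column:N', scale=alt.Scale(range=["#f50520", "#bab6b7"])),
    tooltip=alt.Tooltip(['ON_ACCOUNT', 'Rcount']),
)

The same sort argument would go on the text layer's x encoding so the labels stay aligned with the bars.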
I'm running a K-means clustering model, and I want to analyse the cluster centroids; however, the centers output is a LIST of my 20 centroids, with their coordinates (8 each) as an ARRAY. I need it as a dataframe, with clusters 1:20 as rows and their attribute values (centroid coordinates) as columns, like so:
c1 | 0.85 | 0.03 | 0.01 | 0.00 | 0.12 | 0.01 | 0.00 | 0.12
c2 | 0.25 | 0.80 | 0.10 | 0.00 | 0.12 | 0.01 | 0.00 | 0.77
c3 | 0.05 | 0.10 | 0.00 | 0.82 | 0.00 | 0.00 | 0.22 | 0.00
The dataframe format is important because what I WANT to do is:
For each centroid
Identify the 3 strongest attributes
Create a "name" for each of the 20 centroids that is a concatenation of the 3 most dominant traits in that centroid
For example:
c1 | milk_eggs_cheese
c2 | meat_milk_bread
c3 | toiletries_bread_eggs
This code is running in Zeppelin, EMR version 5.19, Spark 2.4. The model works great, but this is the boilerplate code from the Spark documentation (https://spark.apache.org/docs/latest/ml-clustering.html#k-means), which produces the list-of-arrays output that I can't really use.
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
    print(center)
This is an excerpt of the output I get.
Cluster Centers:
[0.12391775 0.04282062 0.00368751 0.27282358 0.00533401 0.03389095
0.04220946 0.03213536 0.00895981 0.00990327 0.01007891]
[0.09018751 0.01354349 0.0130329 0.00772877 0.00371508 0.02288211
0.032301 0.37979978 0.002487 0.00617438 0.00610262]
[7.37626746e-02 2.02469798e-03 4.00944473e-04 9.62304581e-04
5.98964859e-03 2.95190585e-03 8.48736175e-01 1.36797882e-03
2.57451073e-04 6.13320072e-04 5.70559278e-04]
Based on "How to convert a list of array to Spark dataframe", I have tried this:
df = sc.parallelize(centers).toDF(['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
df.show()
But this throws the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
model.clusterCenters() gives you a list of numpy arrays, not a list of lists like in the answer you have linked. Just convert the numpy arrays to lists before creating the dataframe:
bla = [e.tolist() for e in centers]
df = sc.parallelize(bla).toDF(['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
# or: df = spark.createDataFrame(bla, ['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
df.show()
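From there, the naming step described in the question (concatenating the 3 strongest attributes per centroid) can be done directly on the numpy arrays, without Spark. A minimal sketch, assuming centers is the list returned by model.clusterCenters() and the column names from the question:

import numpy as np

cols = np.array(['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat',
                 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])

names = []
for center in centers:
    top3 = np.argsort(center)[::-1][:3]  # indices of the 3 largest coordinates
    names.append("_".join(cols[top3]))

for i, name in enumerate(names, start=1):
    print(f"c{i} | {name}")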
I have 2 questions concerning estat vif to test multicollinearity:
Is it correct that you can only calculate estat vif after the regress command?
If I execute this command, Stata only gives me the VIF of one independent variable.
How do I get the VIF of all the independent variables?
Q1. I find estat vif documented under regress postestimation. If you can find it documented under any other postestimation heading, then it is applicable after that command.
Q2. You don't give any examples, reproducible or otherwise, of your problem. But estat vif by default gives a result for each predictor (independent variable).
. sysuse auto, clear
(1978 Automobile Data)
. regress mpg weight price
Source | SS df MS Number of obs = 74
-------------+---------------------------------- F(2, 71) = 66.85
Model | 1595.93249 2 797.966246 Prob > F = 0.0000
Residual | 847.526967 71 11.9369995 R-squared = 0.6531
-------------+---------------------------------- Adj R-squared = 0.6434
Total | 2443.45946 73 33.4720474 Root MSE = 3.455
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight | -.0058175 .0006175 -9.42 0.000 -.0070489 -.0045862
price | -.0000935 .0001627 -0.57 0.567 -.000418 .0002309
_cons | 39.43966 1.621563 24.32 0.000 36.20635 42.67296
------------------------------------------------------------------------------
. estat vif
Variable | VIF 1/VIF
-------------+----------------------
price | 1.41 0.709898
weight | 1.41 0.709898
-------------+----------------------
Mean VIF | 1.41
Hi, I am trying to load RDD data into a Cassandra column family using Scala. Out of a total of 50 rows, only 28 are getting stored in the Cassandra table.
Below is the code snippet:
val states = sc.textFile("state.txt")
// list of all the 50 states of the USA
var n = 0 // corrected to var
val statesRDD = states.map { a =>
  n = n + 1
  (n, a)
}
scala> statesRDD.count
res2: Long = 50
cqlsh:brs> CREATE TABLE BRS.state(state_id int PRIMARY KEY, state_name text);
statesRDD.saveToCassandra("brs", "state", SomeColumns("state_id", "state_name"))
// this statement saves only 28 rows out of 50, not sure why!
cqlsh:brs> select * from state;
state_id | state_name
----------+-------------
23 | Minnesota
5 | California
28 | Nevada
10 | Georgia
16 | Kansas
13 | Illinois
11 | Hawaii
1 | Alabama
19 | Maine
8 | Oklahoma
2 | Alaska
4 | New York
18 | Virginia
15 | Iowa
22 | Wyoming
27 | Nebraska
20 | Maryland
7 | Ohio
6 | Colorado
9 | Florida
14 | Indiana
26 | Montana
21 | Wisconsin
17 | Vermont
24 | Mississippi
25 | Missouri
12 | Idaho
3 | Arizona
(28 rows)
Can anyone please help me in finding where the issue is?
Edit:
I understood why only 28 rows are getting stored in Cassandra: I made the first column the PRIMARY KEY, and in my code n is incremented only up to 28 and then starts again from 1 up to 22 (50 in total), so rows with duplicate keys overwrite earlier ones.
val states = sc.textFile("states.txt")
var n = 0
var statesRDD = states.map { a =>
  n += 1
  (n, a)
}
I tried making n an accumulator variable as well (viz. val n = sc.accumulator(0, "Counter")), but I don't see any difference in the output.
scala> statesRDD.foreach(println)
[Stage 2:> (0 + 0) / 2]
(1,New Hampshire)
(2,New Jersey)
(3,New Mexico)
(4,New York)
(5,North Carolina)
(6,North Dakota)
(7,Ohio)
(8,Oklahoma)
(9,Oregon)
(10,Pennsylvania)
(11,Rhode Island)
(12,South Carolina)
(13,South Dakota)
(14,Tennessee)
(15,Texas)
(16,Utah)
(17,Vermont)
(18,Virginia)
(19,Washington)
(20,West Virginia)
(21,Wisconsin)
(22,Wyoming)
(1,Alabama)
(2,Alaska)
(3,Arizona)
(4,Arkansas)
(5,California)
(6,Colorado)
(7,Connecticut)
(8,Delaware)
(9,Florida)
(10,Georgia)
(11,Hawaii)
(12,Idaho)
(13,Illinois)
(14,Indiana)
(15,Iowa)
(16,Kansas)
(17,Kentucky)
(18,Louisiana)
(19,Maine)
(20,Maryland)
(21,Massachusetts)
(22,Michigan)
(23,Minnesota)
(24,Mississippi)
(25,Missouri)
(26,Montana)
(27,Nebraska)
(28,Nevada)
I am curious to know what is causing n not to get updated after the value 28. Also, what are the ways in which I can create a counter that I can use for creating the RDD?
There are some misconceptions about distributed systems embedded inside your question. The real heart of this is "How do I have a counter in a distributed system?"
The short answer is: you don't. What you've done in your original code example is something like this:
Task One {
  var x = 0
  record 1: x = 1
  record 2: x = 2
}

Task Two {
  var x = 0
  record 20: x = 1
  record 21: x = 2
}
Each machine independently creates a new x variable set to 0, which gets incremented within its own context, independently of the other nodes.
For most use cases the "counter" question can be replaced with "How can I get a unique identifier per record in a distributed system?"
For this, most users end up using a UUID, which can be generated on independent machines with infinitesimal chances of conflicts.
If the question is instead "How can I get a monotonically increasing unique identifier?", then you can use zipWithUniqueId, which does not count sequentially but generates unique ids that increase within each partition (zipWithIndex gives consecutive ids instead, at the cost of triggering an extra Spark job).
If you just want the numbers to start from 1, it's best to do it on the local system.
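A minimal PySpark sketch of both options (the Scala RDD API is analogous; the data here is illustrative):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
states = sc.parallelize(["Alabama", "Alaska", "Arizona", "Arkansas"], 2)

# zipWithUniqueId: unique ids with no global counting; element i of
# partition k gets id k, k + n, k + 2n, ... where n = number of partitions
print(states.zipWithUniqueId().map(lambda t: (t[1], t[0])).collect())

# zipWithIndex: consecutive ids 0..n-1 in RDD order, but it triggers an
# extra job to compute the size of each partition first
print(states.zipWithIndex().map(lambda t: (t[1] + 1, t[0])).collect())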
Edit: Why can't I use an accumulator?
Accumulators store their state (surprise) per task. You can see this with a little example:
val x = sc.accumulator(0, "x")
sc.parallelize(1 to 50).foreachPartition { it => it.foreach(y => x += 1); println(x) }
/*
6
7
6
6
6
6
6
7
*/
x.value
// res38: Int = 50
The accumulators combine their state after finishing their tasks, which means you can't use them as a global distributed counter.
Is it possible to copy a labeled categorical variable in a single line, or do I generally have to copy over labels as a separate step?
In the case I'm looking at, egen ... group() comes close, but changes the underlying integers:
sysuse auto
** starts them from different indices
egen mycut = cut(mpg), at(0 20 30 50) label icodes
egen mycut_copy = group(mycut), label
** does weird stuff
egen mycut2 = cut(mpg), at(0 20 30 50) label icodes
replace mycut2 = group(mycut2)
egen mycut_copy2 = group(mycut2), label
** the correct approach?
egen mycut3 = cut(mpg), at(0 20 30 50) label icodes
gen mycut_copy3 = mycut3
label values mycut_copy3 mycut3
You can do what you want very easily using the lesser-known clonevar command:
sysuse auto, clear
egen mycut = cut(mpg), at(0 20 30 50) label icodes
clonevar mycut2 = mycut
list mycut* in 1/10, separator(0)
+----------------+
| mycut mycut2 |
|----------------|
1. | 20- 20- |
2. | 0- 0- |
3. | 20- 20- |
4. | 20- 20- |
5. | 0- 0- |
6. | 0- 0- |
7. | 20- 20- |
8. | 20- 20- |
9. | 0- 0- |
10. | 0- 0- |
+----------------+
Note that group() refers to different functions when used with generate and egen, which is why you do not get the same results.