How can I fix this NotImplementedError in a Python class while working with the PyTorch framework?

Hello everyone, I am working on the CIFAR10 dataset using PyTorch. I have developed a model which works absolutely fine, but the problem occurs when running the following code:
import time

start_time = time.time()
epochs = 5
train_losses = []
test_losses = []
train_correct = []
test_correct = []

for i in range(epochs):
    tsn_corr = 0
    tst_corr = 0

    # Run the training batches
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1
        y_pred = model(X_train)
        loss = criterion(y_pred, y_train)

        # Tally the number of correct predictions
        predicted = torch.max(y_pred.data, 1)[1]
        batch_corr = (predicted == y_train).sum()
        tsn_corr += batch_corr

        # Optimize parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Print interim results
        if b % 600 == 0:
            print(f"epochs: {i}, batch: {b}, loss: {loss.item():10.8f}")

    loss = loss.detach().numpy()
    train_losses.append(loss)
    train_correct.append(tsn_corr)

    # Run the test batches
    with torch.no_grad():
        for b, (X_test, y_test) in enumerate(test_loader):
            b += 1
            y_val = model(X_test)

            # Tally the number of correct predictions
            predicted = torch.max(y_val.data, 1)[1]
            batch_corr = (predicted == y_test).sum()
            tst_corr += batch_corr

    loss = criterion(y_val, y_test)
    loss = loss.detach().numpy()
    test_losses.append(loss)
    test_correct.append(tst_corr)
The following error occurs when I run this code:
NotImplementedError Traceback (most recent call last)
<ipython-input-43-48e21e83e9f7> in <module>
15 b+=1
16
---> 17 y_pred=model(X_train)
18 loss=criterion(y_pred,y_train)
19
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _forward_unimplemented(self, *input)
199 registered hooks while the latter silently ignores them.
200 """
--> 201 raise NotImplementedError
202
203
NotImplementedError:
Can someone tell me what I can do to fix this code? Apart from this, all of the previous code runs fine, and the model I built using convolutional neural networks also runs successfully, so there should be no problem with the model itself. One detail that might help: this exact code works just fine on the MNIST dataset. I don't know what the problem is with the CIFAR dataset.

Your model class needs to implement a forward method. See the PyTorch Example on Subclassing for an example.
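For reference, a minimal sketch of what PyTorch expects; the layer sizes here are hypothetical, chosen only to match CIFAR10's 3x32x32 inputs, and the point is simply that forward must be defined, spelled exactly that way, inside the class body:

import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 3, 1)     # CIFAR10 images have 3 channels
        self.fc1 = nn.Linear(6 * 15 * 15, 10)  # 10 CIFAR10 classes

    # Module.__call__ dispatches to forward; if forward is missing,
    # misspelled, or indented outside the class body, PyTorch raises
    # the NotImplementedError seen in the traceback above.
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2, 2)              # 30x30 -> 15x15
        x = x.view(-1, 6 * 15 * 15)
        return self.fc1(x)

Since the MNIST version of the code works, a likely culprit is that the CIFAR10 model class misspells or mis-indents its forward method.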

Related

Is there a way to use the Spark MLlib CrossValidator without a parameter grid?

I want to use cross-validation instead of the normal validation-set approach, just as a means to get a better estimate of the test error rate. I am using the Spark MLlib DataFrame-based API. However, if I run the following code:
cv = tuning.CrossValidator(estimator=randomForestRegressor, evaluator=evaluator, numFolds=5)
cv_model = cv.fit(vsdf)
I get the error:
KeyError Traceback (most recent call last)
<ipython-input-44-d4e7a9d3602e> in <module>
----> 1 cv_model = cv.fit(vsdf)
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\base.py in fit(self, dataset, params)
159 return self.copy(params)._fit(dataset)
160 else:
--> 161 return self._fit(dataset)
162 else:
163 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\tuning.py in _fit(self, dataset)
667 def _fit(self, dataset):
668 est = self.getOrDefault(self.estimator)
--> 669 epm = self.getOrDefault(self.estimatorParamMaps)
670 numModels = len(epm)
671 eva = self.getOrDefault(self.evaluator)
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\param\__init__.py in getOrDefault(self, param)
344 return self._paramMap[param]
345 else:
--> 346 return self._defaultParamMap[param]
347
348 def extractParamMap(self, extra=None):
KeyError: Param(parent='CrossValidator_a9121a59fda3', name='estimatorParamMaps', doc='estimator param maps')
I guess this is because I have not provided any parameter map to search over. Is there no way to do cross-validation in Spark ML without a parameter grid?
Yes, you can do it; you need to pass an empty parameter grid, though. Something like this should work, and it will behave like a normal k-fold cross-validator:
params_grid = ParamGridBuilder().build()
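A fuller sketch of the same idea, reusing the question's randomForestRegressor, evaluator, and vsdf (those names are assumed to be defined as in the question):

from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# An empty grid still contains one (empty) param map, so CrossValidator
# fits the estimator once per fold: plain k-fold cross-validation.
params_grid = ParamGridBuilder().build()

cv = CrossValidator(estimator=randomForestRegressor,
                    estimatorParamMaps=params_grid,
                    evaluator=evaluator,
                    numFolds=5)
cv_model = cv.fit(vsdf)

cv_model.avgMetrics will then hold a single value: the evaluator's metric averaged over the five folds.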

AssertionError with MiniZinc for Python

I got this error when trying out the sample code (https://minizinc-python.readthedocs.io/en/latest/getting_started.html) from the MiniZinc Python documentation.
from minizinc import Instance, Model, Solver
# Load n-Queens model from file
nqueens = Model("./nqueens.mzn")
# Find the MiniZinc solver configuration for Gecode
gecode = Solver.lookup("gecode")
# Create an Instance of the n-Queens model for Gecode
instance = Instance(gecode, nqueens)
# Assign 4 to n
instance["n"] = 4
result = instance.solve()
# Output the array q
print(result["q"])
The error I got was:
AssertionError Traceback (most recent call last)
<ipython-input-1-a64f1a5182f8> in <module>
2
3 # Load n-Queens model from file
----> 4 nqueens = Model("./nqueens.mzn")
5 # Find the MiniZinc solver configuration for Gecode
6 gecode = Solver.lookup("gecode")
C:\ProgramData\Anaconda3\lib\site-packages\minizinc\model.py in __init__(self, files)
85 self._lock = threading.Lock()
86 if isinstance(files, Path) or isinstance(files, str):
---> 87 self.add_file(files)
88 elif files is not None:
89 for file in files:
C:\ProgramData\Anaconda3\lib\site-packages\minizinc\model.py in add_file(self, file, parse_data)
159 if not isinstance(file, Path):
160 file = Path(file)
--> 161 assert file.exists()
162 if not parse_data:
163 with self._lock:
AssertionError:
I've installed both MiniZinc and Python. I tried using Jupyter Notebook and Spyder, but both had the same issue.
If anyone has faced the same issue and fixed it, I'd appreciate any feedback.
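No answer was posted, but judging from the traceback, the failing assertion is file.exists(): the relative path ./nqueens.mzn is resolved against the interpreter's current working directory, which in Jupyter Notebook or Spyder is often not the folder that holds the file. A quick sanity check (a sketch, not part of the original question):

from pathlib import Path

model_path = Path("./nqueens.mzn")
# Show where the relative path actually points and whether it exists;
# if this prints False, pass an absolute path to Model() instead.
print(model_path.resolve(), model_path.exists())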

How to use `client.start_ipython_workers()` in dask-distributed?

I am trying to get workers to output some information from their IPython kernels and to execute various commands in the IPython session. I tried the examples in the documentation: the ipyparallel example works, but not the second example (with IPython magics). I cannot get the workers to execute any commands. For example, I am stuck on the following issue:
from dask.distributed import Client
client = Client()
info = client.start_ipython_workers()
list_workers = info.keys()
%remote info[list_workers[0]]
The last line returns an error:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-19-9118451af441> in <module>
----> 1 get_ipython().run_line_magic('remote', "info['tcp://127.0.0.1:50497'] worker.active")
~/miniconda/envs/dask/lib/python3.7/site-packages/IPython/core/interactiveshell.py in run_line_magic(self, magic_name, line, _stack_depth)
2334 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2335 with self.builtin_trap:
-> 2336 result = fn(*args, **kwargs)
2337 return result
2338
~/miniconda/envs/dask/lib/python3.7/site-packages/distributed/_ipython_utils.py in remote_magic(line, cell)
115 info_name = split_line[0]
116 if info_name not in ip.user_ns:
--> 117 raise NameError(info_name)
118 connection_info = dict(ip.user_ns[info_name])
119
NameError: info['tcp://127.0.0.1:50497']
I would appreciate any examples of how to get information from the IPython kernels running on the workers.
Posting here just to keep track of it: I raised an issue for this on GitHub: https://github.com/dask/distributed/issues/4522
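In the meantime, the traceback itself suggests what goes wrong at this step: remote_magic takes the first token on the line and looks it up as a bare variable name in the IPython user namespace, so an expression like info['tcp://...'] raises NameError before anything reaches the worker. Binding the connection info to a plain name first should get past that check (a sketch based on that reading; the worker address is whatever the scheduler assigned):

from dask.distributed import Client

client = Client()
info = client.start_ipython_workers()

# %remote resolves its first token via the user namespace rather than
# evaluating it, so give the connection info a bare name:
first_worker = list(info.keys())[0]
worker_info = info[first_worker]
%remote worker_info worker.active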

NetworkXNotImplemented error: connected_components() method on an undirected graph

As I understand it, the connected_components() method in NetworkX should generate the components of a given undirected graph (there are strongly_connected_components() and weakly_connected_components() for directed graphs). I have generated an undirected graph G, but when calling networkx.connected_components(G) I get the error NetworkXNotImplemented: not implemented for directed type.
Note: G was the interaction network of users of the Pretty Good Privacy (PGP) algorithm (http://deim.urv.cat/~aarenas/data/welcome.htm). The network is a single giant component.
I have used the method on many other undirected networks.
import networkx as nx
G=nx.read_pajek("PGPgiantcompo.net")
C=max(nx.connected_components(G), key=len)
Expected result:
C contains the giant component which is G itself.
Actual result:
NetworkXNotImplemented Traceback (most recent call last)
<ipython-input-1-27a0ad1fa789> in <module>()
---> 27 C=max(nx.connected_components(G), key=len)
28 Giant_frozen=G.subgraph(C)
29 nx.draw(Giant_frozen, with_labels=True, font_weight='bold')
<decorator-gen-295> in connected_components(G)
F:\Anaconda3\lib\site-packages\networkx\utils\decorators.py in _not_implemented_for(not_implement_for_func, *args, **kwargs)
78 if match:
79 msg = 'not implemented for %s type' % ' '.join(graph_types)
---> 80 raise nx.NetworkXNotImplemented(msg)
81 else:
82 return not_implement_for_func(*args, **kwargs)
NetworkXNotImplemented: not implemented for directed type
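No answer was posted here, but the traceback points at a common cause: nx.read_pajek returns a multigraph whose directedness follows the file (a Pajek file that lists *arcs parses as directed), so G can be a MultiDiGraph even though the network is conceptually undirected. Converting it first should satisfy connected_components() (a sketch, assuming the file parses as in the question):

import networkx as nx

G = nx.read_pajek("PGPgiantcompo.net")
print(type(G))              # presumably a MultiDiGraph for this file

# connected_components() requires an undirected graph:
G_undirected = nx.Graph(G)  # or G.to_undirected()
C = max(nx.connected_components(G_undirected), key=len)
print(len(C))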

Torchtext AttributeError: 'Example' object has no attribute 'text_content'

I'm working with an RNN using PyTorch & Torchtext. I've got a problem building the vocab for my RNN. My code is as follows:
TEXT = Field(tokenize=tokenizer, lower=True)
LABEL = LabelField(dtype=torch.float)

trainds = TabularDataset(
    path='drive/{}'.format(TRAIN_PATH), format='tsv',
    fields=[
        ('label_start', LABEL),
        ('label_end', None),
        ('title', None),
        ('symbol', None),
        ('text_content', TEXT),
    ])

testds = TabularDataset(
    path='drive/{}'.format(TEST_PATH), format='tsv',
    fields=[
        ('text_content', TEXT),
    ])

TEXT.build_vocab(trainds, testds)
When I want to build vocab, I'm getting this annoying error:
AttributeError: 'Example' object has no attribute 'text_content'
I'm sure that there is no missing text_content attribute. I wrote a try/except to inspect this specific case:
try:
    print(len(trainds[i]))
except:
    print(trainds[i].text_content)
Surprisingly, I don't get any error and this specific print command shows:
['znana', 'okresie', 'masarni', 'walc', 'y', 'myśl', 'programie', 'sprawy', ...]
So it indicates that the text_content attribute is there. When I run this on a smaller dataset, it works like a charm; the problem only occurs when I work with the real data. I've run out of ideas. Maybe someone has had a similar case and can explain it.
My full traceback:
AttributeError Traceback (most recent call last)
<ipython-input-16-cf31866a07e7> in <module>()
155
156 if __name__ == "__main__":
--> 157 main()
158
<ipython-input-16-cf31866a07e7> in main()
117 break
118
--> 119 TEXT.build_vocab(trainds, testds)
120 print('zbudowano dla text')
121 LABEL.build_vocab(trainds)
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py in build_vocab(self, *args, **kwargs)
260 sources.append(arg)
261 for data in sources:
--> 262 for x in data:
263 if not self.sequential:
264 x = [x]
/usr/local/lib/python3.6/dist-packages/torchtext/data/dataset.py in __getattr__(self, attr)
152 if attr in self.fields:
153 for x in self.examples:
--> 154 yield getattr(x, attr)
155
156 @classmethod
AttributeError: 'Example' object has no attribute 'text_content'
This problem arises when the fields are not passed in the same order as the columns in the csv/tsv file. The order must be the same. Also check that you don't list more or fewer fields than there are columns in the csv/tsv file.
I had the same problem.
The reason was that some rows in my input csv dataset were empty.
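If blank rows are indeed the culprit, one way to filter them out before handing the file to TabularDataset (a sketch; the file names are hypothetical and pandas is used only for the cleanup):

import pandas as pd

# Drop rows in which every column is empty, then write a cleaned tsv
# for TabularDataset to read.
df = pd.read_csv("train.tsv", sep="\t", header=None)
df = df.dropna(how="all")
df.to_csv("train_clean.tsv", sep="\t", header=False, index=False)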