NetworkXNotImplemented error: connected_components() method - undirected graph - networkx

As I understand it, the connected_components() method in NetworkX should generate the components of a given undirected graph (there are strongly_connected_components() and weakly_connected_components() for directed graphs). I have generated an undirected graph G, but when I try to call networkx.connected_components(G), I get the error NetworkXNotImplemented: not implemented for directed type.
Note: G was the interaction network of users of the Pretty Good Privacy (PGP) algorithm (http://deim.urv.cat/~aarenas/data/welcome.htm). The network is a single giant component.
I have successfully used the method on many other undirected networks.
import networkx as nx

G = nx.read_pajek("PGPgiantcompo.net")
C = max(nx.connected_components(G), key=len)
Expected result:
C contains the giant component which is G itself.
Actual result:
NetworkXNotImplemented Traceback (most recent call last)
<ipython-input-1-27a0ad1fa789> in <module>()
---> 27 C=max(nx.connected_components(G), key=len)
28 Giant_frozen=G.subgraph(C)
29 nx.draw(Giant_frozen, with_labels=True, font_weight='bold')
<decorator-gen-295> in connected_components(G)
F:\Anaconda3\lib\site-packages\networkx\utils\decorators.py in _not_implemented_for(not_implement_for_func, *args, **kwargs)
78 if match:
79 msg = 'not implemented for %s type' % ' '.join(graph_types)
---> 80 raise nx.NetworkXNotImplemented(msg)
81 else:
82 return not_implement_for_func(*args, **kwargs)
NetworkXNotImplemented: not implemented for directed type
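The traceback points at the cause: nx.read_pajek builds a MultiDiGraph whenever the Pajek file lists its links under *arcs rather than *edges, so G comes back directed even though the network itself is not. A minimal sketch of one fix, converting the graph before taking components (this assumes the arcs are symmetric, as they are for an undirected network stored as arcs):
import networkx as nx

G = nx.read_pajek("PGPgiantcompo.net")
print(type(G))  # MultiDiGraph if the .net file declares *arcs

# Collapse the arcs into an undirected graph first
G = G.to_undirected()
C = max(nx.connected_components(G), key=len)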

Related

Is there a way to use spark MLLib CrossValidator without parameter grid?

I want to use cross-validation instead of the normal validation-set approach, simply as a means to get a better estimate of the test error rate. I am using the Spark MLlib DataFrame-based API. However, if I run the following code -
cv = tuning.CrossValidator(estimator=randomForestRegressor, evaluator=evaluator, numFolds=5)
cv_model = cv.fit(vsdf)
I get the error -
KeyError Traceback (most recent call last)
<ipython-input-44-d4e7a9d3602e> in <module>
----> 1 cv_model = cv.fit(vsdf)
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\base.py in fit(self, dataset, params)
159 return self.copy(params)._fit(dataset)
160 else:
--> 161 return self._fit(dataset)
162 else:
163 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\tuning.py in _fit(self, dataset)
667 def _fit(self, dataset):
668 est = self.getOrDefault(self.estimator)
--> 669 epm = self.getOrDefault(self.estimatorParamMaps)
670 numModels = len(epm)
671 eva = self.getOrDefault(self.evaluator)
C:\Spark\spark-3.1.2-bin-hadoop3.2\python\pyspark\ml\param\__init__.py in getOrDefault(self, param)
344 return self._paramMap[param]
345 else:
--> 346 return self._defaultParamMap[param]
347
348 def extractParamMap(self, extra=None):
KeyError: Param(parent='CrossValidator_a9121a59fda3', name='estimatorParamMaps', doc='estimator param maps')
I guess this is because I have not provided a parameter map to search over. Is there no way to do cross-validation in spark-ml without a parameter grid?
Yes, you can. You need to pass an empty parameter grid, though. Something like this should work; it will behave like a normal k-fold cross-validator:
params_grid = ParamGridBuilder().build()
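Expanded into a runnable sketch (randomForestRegressor, evaluator, and vsdf are the asker's objects from the question):
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# An empty grid means a single candidate model (the estimator's current
# parameters), so CrossValidator reduces to plain k-fold evaluation.
params_grid = ParamGridBuilder().build()

cv = CrossValidator(estimator=randomForestRegressor,
                    estimatorParamMaps=params_grid,  # the argument that was missing
                    evaluator=evaluator,
                    numFolds=5)
cv_model = cv.fit(vsdf)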

How can I fix this NotImplementedError in python class module while working in pytorch framework?

Hello everyone. I am working on the CIFAR-10 dataset using PyTorch. I have developed a model which works absolutely fine, but the main problem occurs while running the following code:
import time
start_time = time.time()

epochs = 5
train_losses = []
test_losses = []
train_correct = []
test_correct = []

for i in range(epochs):
    tsn_corr = 0
    tst_corr = 0
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1
        y_pred = model(X_train)
        loss = criterion(y_pred, y_train)
        # Tally the number of correct predictions
        predicted = torch.max(y_pred.data, 1)[1]
        batch_corr = (predicted == y_train).sum()
        tsn_corr += batch_corr
        # Optimize parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Print interim results
        if b % 600 == 0:
            print(f"epochs: {i}, batch: {b}, loss: {loss.item():10.8f}")
    loss = loss.detach().numpy()
    train_losses.append(loss)
    train_correct.append(tsn_corr)

    # Running the test batches
    with torch.no_grad():
        for b, (X_test, y_test) in enumerate(test_loader):
            b += 1
            y_val = model(X_test)
            # Tally the number of correct predictions
            predicted = torch.max(y_val.data, 1)[1]
            batch_corr = (predicted == y_test).sum()
            tst_corr += batch_corr
    loss = criterion(y_val, y_test)
    loss = loss.detach().numpy()
    test_losses.append(loss)
    test_correct.append(tst_corr)
The following error occurs when I run it:
NotImplementedError Traceback (most recent call last)
<ipython-input-43-48e21e83e9f7> in <module>
15 b+=1
16
---> 17 y_pred=model(X_train)
18 loss=criterion(y_pred,y_train)
19
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _forward_unimplemented(self, *input)
199 registered hooks while the latter silently ignores them.
200 """
--> 201 raise NotImplementedError
202
203
NotImplementedError:
Can someone tell me what I can do to fix this code? All the previous code runs fine, and the convolutional-neural-network model I made also builds successfully, meaning that there is no problem with the model itself. One detail that might help: this code works just fine on the MNIST dataset. I don't know what the problem is with the CIFAR dataset.
Your model class needs to implement a forward method. See the PyTorch example on subclassing nn.Module for details.
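A minimal sketch of the required structure (the layers are illustrative stand-ins, not the asker's actual architecture). A common cause of this exact error is a typo in the method name or an indentation level that puts forward outside the class:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 3)        # CIFAR-10 images have 3 channels
        self.fc1 = nn.Linear(6 * 30 * 30, 10)  # 10 output classes

    # Without this method, nn.Module falls back to
    # _forward_unimplemented and raises NotImplementedError.
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = x.view(x.size(0), -1)
        return self.fc1(x)

model = Net()
y_pred = model(torch.randn(4, 3, 32, 32))  # works once forward is defined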

Python Jupyter Notebook scipy

For a long time I was able to add data, fit, and then plot the curve together with the data. But recently I get this:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-6f645a2744bc> in <module>
1 poland = prepare_data(europe_data, 'Poland')
----> 2 plot_all(poland, max_y=400000)
3 poland
~/Pulpit/library.py in plot_all(country, max_x, max_y)
43 def plot_all(country, max_x = 1000, max_y = 500000):
44
---> 45 parameters_logistic = scipy.optimize.curve_fit(func_logistic, country['n'], country['all'])[0]
46 parameters_expo = scipy.optimize.curve_fit(func_expo, country['n'], country['all'])[0]
47
/usr/local/lib64/python3.6/site-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs)
787 cost = np.sum(infodict['fvec'] ** 2)
788 if ier not in [1, 2, 3, 4]:
--> 789 raise RuntimeError("Optimal parameters not found: " + errmsg)
790 else:
791 # Rename maxfev (leastsq) to max_nfev (least_squares), if specified.
RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
Here are all Python Jupyter Notebook files: https://files.fm/u/zj7cc6ne#sign_up
How to solve this?
scipy.optimize.curve_fit takes a keyword argument p0.
Initial guess for the parameters (length N). If None, then the initial
values will all be 1 (if the number of parameters for the function can
be determined using introspection, otherwise a ValueError is raised).
If the default values of 1 are too far off from the true parameters, the algorithm may not converge. Try passing some values that make sense for your problem.
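A minimal sketch of passing p0, with an illustrative logistic model standing in for the asker's func_logistic (whose exact signature in library.py is not shown):
import numpy as np
import scipy.optimize

def func_logistic(x, L, k, x0):
    # Illustrative logistic curve: L / (1 + exp(-k * (x - x0)))
    return L / (1.0 + np.exp(-k * (x - x0)))

# Synthetic stand-in for country['n'] and country['all']
x = np.arange(100, dtype=float)
y = func_logistic(x, 400000, 0.15, 60) + np.random.normal(0, 1000, x.size)

# Rough but sensible starting guesses instead of the default all-ones;
# raising maxfev also buys the optimizer more iterations if needed.
p0 = [y.max(), 0.1, np.median(x)]
parameters_logistic, _ = scipy.optimize.curve_fit(func_logistic, x, y, p0=p0, maxfev=5000)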

Mismatch in Input Shape of tf.keras model

I am getting a mismatch in the input shape of a tf.keras model. The code block is given below, along with the stack trace. I am using hub.KerasLayer as my first layer. The model is intended to be trained using TensorFlow Federated (TFF). The inputs to the model are variable-length strings. Kindly suggest a way out.
# Making a TensorFlow model
from tensorflow import keras

def create_keras_model():
    encoder = hub.load("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1")
    return tf.keras.models.Sequential([
        hub.KerasLayer(encoder, input_shape=[], dtype=tf.string, trainable=True),
        keras.layers.Dense(32, activation='relu'),
        keras.layers.Dense(16, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

def model_fn():
    # We _must_ create a new model here, and _not_ capture it from an external
    # scope. TFF will call this within different graph contexts.
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=preprocessed_example_dataset.element_spec,
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.Accuracy()])

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:Model was constructed with shape (None,) for input Tensor("keras_layer_input:0", shape=(None,), dtype=string), but it was called on an input with incompatible shape (None, None).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-27-68fa27e65b7e> in <module>()
3 model_fn,
4 client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
----> 5 server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
18 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/learning/federated_averaging.py:91 __call__ *
    num_examples_sum = dataset.reduce(
/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/learning/model_utils.py:152 forward_pass *
    self._model.forward_pass(batch_input, training), model_lib.BatchOutput)
/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/learning/keras_utils.py:391 forward_pass *
    return self._forward_pass(batch_input, training=training)
/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/learning/keras_utils.py:359 _forward_pass *
    predictions = self._keras_model(inputs, training=training)
/usr/local/lib/python3.6/dist-packages/tensorflow_hub/keras_layer.py:222 call *
    result = f()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py:486 _call_attribute **
    return instance.__call__(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:580 __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:618 _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2419 __call__
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2735 _maybe_define_function
    *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2238 canonicalize_function_inputs
    self._flat_input_signature)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2305 _convert_inputs_to_signature
    format_error_message(inputs, input_signature))
ValueError: Python inputs incompatible with input_signature:
  inputs: (
    Tensor("batch_input:0", shape=(None, None), dtype=string))
  input_signature: (
    TensorSpec(shape=(None,), dtype=tf.string, name=None))
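The traceback itself narrows this down: the hub layer was built for rank-1 string batches (input_shape=[], i.e. shape (None,)), but the federated dataset delivers rank-2 batches of shape (None, None). If that second dimension is just a singleton left over from batching, one hedged fix is to squeeze it away in preprocessing, before element_spec is taken. A minimal sketch; the (features, label) tuple structure is an assumption, since preprocessed_example_dataset is not shown:
# Hypothetical preprocessing step: drop the extra dimension so each batch
# of strings has shape (batch_size,) to match input_shape=[].
def squeeze_text(x, y):
    return tf.squeeze(x, axis=1), y

preprocessed_example_dataset = preprocessed_example_dataset.map(squeeze_text)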

Torchtext AttributeError: 'Example' object has no attribute 'text_content'

I'm working with an RNN using PyTorch & Torchtext. I've got a problem with building the vocab for my RNN. My code is as follows:
TEXT = Field(tokenize=tokenizer, lower=True)
LABEL = LabelField(dtype=torch.float)

trainds = TabularDataset(
    path='drive/{}'.format(TRAIN_PATH), format='tsv',
    fields=[
        ('label_start', LABEL),
        ('label_end', None),
        ('title', None),
        ('symbol', None),
        ('text_content', TEXT),
    ])
testds = TabularDataset(
    path='drive/{}'.format(TEST_PATH), format='tsv',
    fields=[
        ('text_content', TEXT),
    ])
TEXT.build_vocab(trainds, testds)
When I try to build the vocab, I get this annoying error:
AttributeError: 'Example' object has no attribute 'text_content'
I'm sure that there is no missing text_content attribute. I added a try/except to inspect this specific case:
try:
    print(len(trainds[i]))
except:
    print(trainds[i].text_content)
Surprisingly, I don't get any error and this specific print command shows:
['znana', 'okresie', 'masarni', 'walc', 'y', 'myśl', 'programie', 'sprawy', ...]
So it indicates that the text_content attribute is there. When I run this on a smaller dataset, it works like a charm; the problem only occurs with the proper data. I've run out of ideas. Maybe someone has had a similar case and can explain it.
My full traceback:
AttributeError Traceback (most recent call last)
<ipython-input-16-cf31866a07e7> in <module>()
155
156 if __name__ == "__main__":
--> 157 main()
158
<ipython-input-16-cf31866a07e7> in main()
117 break
118
--> 119 TEXT.build_vocab(trainds, testds)
120 print('zbudowano dla text')
121 LABEL.build_vocab(trainds)
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py in build_vocab(self, *args, **kwargs)
260 sources.append(arg)
261 for data in sources:
--> 262 for x in data:
263 if not self.sequential:
264 x = [x]
/usr/local/lib/python3.6/dist-packages/torchtext/data/dataset.py in __getattr__(self, attr)
152 if attr in self.fields:
153 for x in self.examples:
--> 154 yield getattr(x, attr)
155
156 @classmethod
AttributeError: 'Example' object has no attribute 'text_content'
This problem arises when the fields are not passed in the same order as they appear in the csv/tsv file. The order must be the same. Also check that you specify exactly as many fields as the csv/tsv file contains, no more and no fewer.
I had the same problem.
The reason was that some rows in my input csv dataset were empty.
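If empty rows are indeed the cause, a minimal sketch of filtering them out before handing the file to TabularDataset (the file names here are illustrative):
import csv

# Rewrite the tsv, keeping only rows where every column has content.
with open('train.tsv', newline='', encoding='utf-8') as src, \
     open('train_clean.tsv', 'w', newline='', encoding='utf-8') as dst:
    reader = csv.reader(src, delimiter='\t')
    writer = csv.writer(dst, delimiter='\t')
    for row in reader:
        if row and all(col.strip() for col in row):
            writer.writerow(row)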