Hi, I'm trying to use the Brill tagger to tag a set of sentences. But when I run the following,
=======================
Training a Brill Tagger
>>> default_tagger = DefaultTagger('NN')
>>> initial_tagger = backoff_tagger(train_sents, [UnigramTagger, BigramTagger, TrigramTagger], backoff=default_tagger)
>>> initial_tagger.evaluate(test_sents)
0.8806820634578028
>>> from tag_util import train_brill_tagger
>>> brill_tagger = train_brill_tagger(initial_tagger, train_sents)
>>> brill_tagger.evaluate(test_sents)
0.8827541549751781
I'm getting the following error:
NameError: name 'backoff_tagger' is not defined
What are the causes of this? Do I need to import something?
I think you need to import the following:
from tag_util import backoff_tagger
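If you don't have the book's tag_util.py on your path, here is a minimal sketch of what those two helpers look like (this follows the pattern used in the NLTK Cookbook, where tag_util comes from; adjust to your NLTK version):

from nltk.tag import brill, brill_trainer

def backoff_tagger(train_sents, tagger_classes, backoff=None):
    # Chain the taggers so each one backs off to the previously built one.
    for cls in tagger_classes:
        backoff = cls(train_sents, backoff=backoff)
    return backoff

def train_brill_tagger(initial_tagger, train_sents, **kwargs):
    # Train a Brill tagger on top of the initial tagger using the
    # standard fntbl37 transformation templates shipped with NLTK 3.
    templates = brill.fntbl37()
    trainer = brill_trainer.BrillTaggerTrainer(initial_tagger, templates, deterministic=True)
    return trainer.train(train_sents, **kwargs)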
So I had a much bigger problem, but I sorted that out. Now I have this error:
pygame 2.0.1 (SDL 2.0.14, Python 3.7.9)
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
File "c:/Users/danku/jarvis.py", line 16, in
engine = pyttsx3.init('sapi5')
AttributeError: module 'pyttsx3' has no attribute 'init'
I tried pretty much everything (installed pygame, pypiwin32, pywintypes) but I can't figure it out. Here is my beloved code (don't laugh, it's Jarvis code):
#alap
import pyttsx3
import datetime
import speech_recognition as sr
import wikipedia
import webbrowser
import os
import pywhatkit
import pyjokes
import subprocess
import pywintypes
import win32com.client
import pygame

engine = pyttsx3.init('sapi5')

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def time():
    Time = datetime.datetime.now().strftime("%H:%M:%S")
    speak(Time)

def date():
    year = int(datetime.datetime.now().year)
    month = int(datetime.datetime.now().month)
    date = int(datetime.datetime.now().day)
    speak(date)
    speak(month)
    speak(year)

def wishme():
    speak("Welcome back sir! All system are ready for work!")
    speak("the current time is")
    time()
    speak("The current date is")
    date()
    hour = datetime.datetime.now().hour
    if hour >= 6 and hour < 12:
        speak("Good morning sir!")
    elif hour >= 12 and hour < 18:
        speak("Good afternoon sir!")
    elif hour >= 18 and hour < 24:
        speak("Good evening sir!")
    else:
        speak("Good night sir!")
    speak("Jarvis at your service. Please tell me how can i help you?")

def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("listening...")
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-US')
        print(query)
    except Exception as e:
        print(e)
        speak("Say that again")
        return "none"
    return query

if __name__ == "__main__":
    wishme()
    while True:
        query = takeCommand().lower()
        if 'wikipedia' in query:  # if wikipedia is found in the query this block runs
            speak('Searching Wikipedia...')
            query = query.replace("wikipedia", "")
            results = wikipedia.summary(query, sentences=2)
            speak("According to Wikipedia")
            print(results)
            speak(results)
Also, I'm using Python 2.71 and the latest version of pip.
This normally happens because you have named your Python file the same as the module you are importing, which creates a circular reference. Try changing the name of your file; that should resolve the issue.
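For example, if your script (or another file in the same folder) is itself called pyttsx3.py, then import pyttsx3 picks up your file instead of the installed package, and the real init function is never found. A quick way to check which file is actually being imported (just a diagnostic sketch; the path in the comment is illustrative):

import pyttsx3

# If this prints a path like C:/Users/danku/pyttsx3.py instead of something
# inside site-packages, your own file is shadowing the library -- rename it.
print(pyttsx3.__file__)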
I'm trying to convert a VGG model to Core ML with coremltools. When I run the following code to convert the model:
import keras
from keras.models import load_model
from keras.layers import InputLayer
from keras.utils import CustomObjectScope
import coremltools

with CustomObjectScope({'relu6': keras.layers.ReLU, 'DepthwiseConv2D': keras.layers.DepthwiseConv2D}):
    model_directory = 'KerasModels/VGG-7-3-20_13categories.h5'
    keras_model = load_model(model_directory)
    input_layer = InputLayer(input_shape=(224, 224, 3), name="input_1")
    # Swap in the new input layer, save, and convert:
    keras_model.layers[0] = input_layer
    keras_model.save(model_directory)
    print("Changed2")
    your_model = coremltools.converters.keras.convert(model_directory, input_names=['image'], output_names=['output'], image_input_names='image')
    your_model.save('RecycleNet.mlmodel')
I get the following error:
TypeError: 'InputLayer' object is not iterable
How should I go about converting this model to coremltools? Thanks
I fixed this error by switching from using:
coremltools.converters.keras
to:
coremltools.converters.tensorflow
This line of code solved it for me:
coremlModel = coremltools.convert(model)
instead of using this:
coremlModel = coremltools.converters.keras.convert(model)
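For reference, here is a minimal sketch of that newer unified API (coremltools 4+); it assumes the model loads with tf.keras, and the file names are just the ones from the question:

import coremltools as ct
import tensorflow as tf

# Load the saved Keras model and let the unified converter detect the format.
keras_model = tf.keras.models.load_model('KerasModels/VGG-7-3-20_13categories.h5')

# Declare the input as an image so the .mlmodel accepts image inputs.
mlmodel = ct.convert(keras_model, inputs=[ct.ImageType(shape=(1, 224, 224, 3))])
mlmodel.save('RecycleNet.mlmodel')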
When I run the following code I get an error. I am running the code on the Databricks platform, but the code is written using pandas.
TypeError: 'DataFrame' object does not support item assignment
Can someone let me know if the error is related to the Spark / Databricks platform not supporting the code?
import numpy as np
import pandas as pd

def matchSchema(df):
    df['active'] = df['active'].astype('boolean')
    df['price'] = df['counts']/100
    df.drop('counts', axis=1, inplace=True)
    return df, df.head(3)

(dataset, sample) = matchSchema(df)
print(dataset)
print(sample)
The error is:
TypeError: 'DataFrame' object does not support item assignment
bool is used instead of boolean as the dtype:
df['active'] = df['active'].astype('bool')
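Putting it together, the function from the question would become (a sketch; it assumes df is a plain pandas DataFrame with active and counts columns, as in the question):

import pandas as pd

def matchSchema(df):
    # pandas accepts the NumPy dtype name 'bool' here
    df['active'] = df['active'].astype('bool')
    df['price'] = df['counts'] / 100
    df.drop('counts', axis=1, inplace=True)
    return df, df.head(3)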
I am trying to follow the AI Platform tutorial to upload a model and a prediction routine, but one part fails and I don't understand why.
My prediction class is the same as in their tutorial:
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib

class MyPredictor(object):
    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor
        self._class_names = load_iris().target_names

    def predict(self, instances, **kwargs):
        inputs = np.asarray(instances)
        preprocessed_inputs = self._preprocessor.preprocess(inputs)
        if kwargs.get('probabilities'):
            probabilities = self._model.predict_proba(preprocessed_inputs)
            return probabilities.tolist()
        else:
            outputs = self._model.predict(preprocessed_inputs)
            return [self._class_names[class_num] for class_num in outputs]

    @classmethod
    def from_path(cls, model_dir):
        model_path = os.path.join(model_dir, 'model.joblib')
        model = joblib.load(model_path)
        preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The code I use to create my model in the cloud is:
! gcloud beta ai-platform versions create {VERSION_NAME} \
--model {MODEL_NAME} \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/model/ \
--package-uris gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
But I end up with this odd error:
ERROR: (gcloud.beta.ai-platform.versions.create) Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'ascii' codec can't decode byte 0xf9 in position 2: ordinal not in range(128) (Error code: 0)"
The thing is, when I run the same command without
--prediction-class predictor.MyPredictor
it works fine.
Does anyone know the reason for this? I think model.joblib might have an encoding problem, but when I load it myself there is nothing wrong.
I've found the solution.
In the tutorial, they use pickle to save the preprocessor object they created, and joblib to save the model.
You need to use joblib to save both, and then send them to Google Storage.
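In other words, when you export the artifacts before uploading them, save both with joblib (a sketch; model and preprocessor are the objects from the tutorial, and the file names follow it):

from sklearn.externals import joblib  # on newer scikit-learn: import joblib

# Save both the model and the preprocessor with joblib before copying them
# to the bucket.
joblib.dump(model, 'model.joblib')
joblib.dump(preprocessor, 'preprocessor.pkl')

# ...and in MyPredictor.from_path, load the preprocessor with joblib as well:
#     preprocessor = joblib.load(preprocessor_path)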
I have written this code in MATLAB to link Weka with MATLAB so that I can implement a genetic algorithm.
import java.util.*
import java.util.Enumeration
import java.lang.String
import weka.classifiers.*
import weka.classifiers.Evaluation
import weka.classifiers.trees.J48
import java.io.FileReader
import weka.core.Instances
import weka.core.Utils
import weka.core.Attribute
import java.lang.System
javaaddpath('C:\Users\sagnik\Documents\MATLAB\GA\weka.jar');
clear all
clc
v1 = java.lang.String('-t');
v2 = java.lang.String('C:\Users\sagnik\Documents\MATLAB\GA\generateTrainDiv.csv');
v3 = java.lang.String('-T');
v4 = java.lang.String('C:\Users\sagnik\Documents\MATLAB\GA\generateTestDiv.csv');
prm = cat(1,v1,v2,v3,v4);
classifier = javaObject('weka.classifiers.functions.MultilayerPerceptron');
weka.classifiers.Evaluation.evaluateModel(classifier,prm);
But this gives an error on the last line:
Java exception occurred:
java.lang.Exception:
Weka exception: Can't open file null.
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:1080)
Please can someone help me fix this? How is the filename 'null' here? I have already provided the two filenames.