I have written this code in MATLAB to link Weka with MATLAB so that I can implement a genetic algorithm:
import java.util.*
import java.util.Enumeration
import java.lang.String
import weka.classifiers.*
import weka.classifiers.Evaluation
import weka.classifiers.trees.J48
import java.io.FileReader
import weka.core.Instances
import weka.core.Utils
import weka.core.Attribute
import java.lang.System
javaaddpath('C:\Users\sagnik\Documents\MATLAB\GA\weka.jar');
clear all
clc
v1 = java.lang.String('-t');
v2 = java.lang.String('C:\Users\sagnik\Documents\MATLAB\GA\generateTrainDiv.csv');
v3 = java.lang.String('-T');
v4 = java.lang.String('C:\Users\sagnik\Documents\MATLAB\GA\generateTestDiv.csv');
prm = cat(1,v1,v2,v3,v4);
classifier = javaObject('weka.classifiers.functions.MultilayerPerceptron');
weka.classifiers.Evaluation.evaluateModel(classifier,prm);
But the last line gives this error:
Java exception occurred:
java.lang.Exception:
Weka exception: Can't open file null.
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:1080)
Can someone please help me fix this? How is the filename 'null' here? I have already provided the two filenames.
I am running the following code on the Databricks platform, but it was written using Pandas. When I run it I get this error:
TypeError: 'DataFrame' object does not support item assignment
Can someone let me know if the error is related to the Spark / Databricks platform not supporting the code?
import numpy as np
import pandas as pd

def matchSchema(df):
    df['active'] = df['active'].astype('boolean')
    df['price'] = df['counts']/100
    df.drop('counts', axis=1, inplace=True)
    return df, df.head(3)

(dataset, sample) = matchSchema(df)
print(dataset)
print(sample)
The error is:
TypeError: 'DataFrame' object does not support item assignment
Use bool instead of boolean as the dtype:
df['active'] = df['active'].astype('bool')
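For completeness, here is a minimal runnable sketch of the corrected function with that one-line change applied; the example DataFrame at the bottom is made up purely for illustration:

import pandas as pd

def matchSchema(df):
    # 'bool' is the NumPy dtype accepted by astype; the nullable
    # 'boolean' dtype only exists in pandas >= 1.0.
    df['active'] = df['active'].astype('bool')
    df['price'] = df['counts'] / 100
    df.drop('counts', axis=1, inplace=True)
    return df, df.head(3)

# Hypothetical example data, just to exercise the function:
df = pd.DataFrame({'active': [1, 0, 1], 'counts': [100, 250, 475]})
dataset, sample = matchSchema(df)
print(dataset)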
I am trying to follow the AI Platform tutorial to upload a model and a prediction routine, but one part fails and I don't understand why.
My prediction class is the same as in their tutorial:
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib

class MyPredictor(object):
    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor
        self._class_names = load_iris().target_names

    def predict(self, instances, **kwargs):
        inputs = np.asarray(instances)
        preprocessed_inputs = self._preprocessor.preprocess(inputs)
        if kwargs.get('probabilities'):
            probabilities = self._model.predict_proba(preprocessed_inputs)
            return probabilities.tolist()
        else:
            outputs = self._model.predict(preprocessed_inputs)
            return [self._class_names[class_num] for class_num in outputs]

    @classmethod
    def from_path(cls, model_dir):
        model_path = os.path.join(model_dir, 'model.joblib')
        model = joblib.load(model_path)
        preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The command I use to create my model in the cloud is:
! gcloud beta ai-platform versions create {VERSION_NAME} \
--model {MODEL_NAME} \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/model/ \
--package-uris gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
But I end up with this odd error:
ERROR: (gcloud.beta.ai-platform.versions.create) Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'ascii' codec can't decode byte 0xf9 in position 2: ordinal not in range(128) (Error code: 0)"
The thing is, when I run the same command without
--prediction-class predictor.MyPredictor
it works fine.
Does anyone know the reason for this? I thought model.joblib might have an encoding problem, but when I load it myself nothing is wrong.
I've found the solution.
In the tutorial they use pickle to save the preprocessor object and joblib to save the model.
You need to use joblib to save both, and then upload them to Google Storage.
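A minimal sketch of that fix, assuming model and preprocessor are the objects you trained (the file names follow the tutorial):

from sklearn.externals import joblib  # on newer scikit-learn, use: import joblib

# Save both artifacts with joblib instead of mixing joblib and pickle:
joblib.dump(model, 'model.joblib')
joblib.dump(preprocessor, 'preprocessor.pkl')

With both files written by joblib, from_path can then load them the same way, replacing the pickle-based branch:

@classmethod
def from_path(cls, model_dir):
    # Load both artifacts with joblib for symmetry with how they were saved.
    model = joblib.load(os.path.join(model_dir, 'model.joblib'))
    preprocessor = joblib.load(os.path.join(model_dir, 'preprocessor.pkl'))
    return cls(model, preprocessor)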
Could anyone please find the root cause of the error below?
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.hssf.usermodel.*;
import org.apache.poi.xssf.usermodel.*;
import org.apache.poi.ss.util.*;
import java.io.*;
//def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
File file=new File("C://Users/toothless/Desktop/Don Delete/MyPractice.xlsx")
Workbook workbook = Workbook.getWorkbook(file)
Sheet sheet=workbook.getSheet(0)
rc=sheet.getRows()
log.info rc
I'm getting the following error when executing the above Groovy code:
groovy.lang.MissingMethodException: No signature of method: static org.apache.poi.ss.usermodel.Workbook.getWorkbook() is applicable for argument types: (java.io.File) values: [C:\Users\toothless\Desktop\Don Delete\MyPractice.xlsx] error at line: 10
First, there is no Workbook.getWorkbook method like the one you call (see the POI reference docs). To read an Excel file into a Workbook object, use the code shown below.
For xlsx files:
XSSFWorkbook wb = new XSSFWorkbook(file)
For xls files (HSSFWorkbook has no File constructor, so wrap the file in a stream):
HSSFWorkbook wb = new HSSFWorkbook(new FileInputStream(file))
After that, you can use the methods shown in the docs for further reading.
Hi, I'm trying to use a Brill tagger to tag a set of sentences, but when I run the following,
Training a Brill Tagger
=======================
>>> default_tagger = DefaultTagger('NN')
>>> initial_tagger = backoff_tagger(train_sents, [UnigramTagger, BigramTagger, TrigramTagger], backoff=default_tagger)
>>> initial_tagger.evaluate(test_sents)
0.8806820634578028
>>> from tag_util import train_brill_tagger
>>> brill_tagger = train_brill_tagger(initial_tagger, train_sents)
>>> brill_tagger.evaluate(test_sents)
0.8827541549751781
I'm getting the following error.
NameError: name 'backoff_tagger' is not defined
What is the cause of this? Do I need to import something?
I think you need to import the following:
from tag_util import backoff_tagger
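If tag_util is not importable at all, note that backoff_tagger is a small helper from the NLTK Cookbook that trains a chain of taggers, each backing off to the previous one. A minimal sketch of it looks like this (use it with your own train_sents and tagger classes):

def backoff_tagger(train_sents, tagger_classes, backoff=None):
    # Train each tagger class in turn, using the previously trained
    # tagger as its backoff; return the last (outermost) tagger.
    for cls in tagger_classes:
        backoff = cls(train_sents, backoff=backoff)
    return backoff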
Does anyone have an example of how to properly integrate banana-rdf into a project?
Based on the example on how to use a SPARQL engine, I have tried to set up something for my project, but I get an error that I don't know how to resolve.
import java.net.URL
import org.w3.banana.jena.JenaModule
import org.w3.banana.{SparqlHttpModule, SparqlOpsModule, RDFOpsModule, RDFModule}
object SparqlService extends RDFModule with RDFOpsModule with SparqlOpsModule
  with SparqlHttpModule with JenaModule
import SparqlService._
import SparqlService.sparqlOps
import SparqlService.sparqlOps._
import SparqlService.sparqlHttp.sparqlEngineSyntax._
import SparqlService.ops._
val endpoint = new URL("http://dbpedia.org/sparql/")
val query = parseSelect("""
  PREFIX ont: <http://dbpedia.org/ontology/>
  SELECT DISTINCT ?language WHERE {
    ?language a ont:ProgrammingLanguage .
    ?language ont:influencedBy ?other .
    ?other ont:influencedBy ?language .
  } LIMIT 100
""").get

val answers: Rdf#Solutions = endpoint.executeSelect(query).get
val languages: Iterator[Rdf#URI] = answers.iterator map { row =>
  row("language").get.as[Rdf#URI].get
}
println(languages.to[List])
Unfortunately, I get the following error and I don't understand why.
Error:(27, 26) could not find implicit value for parameter fromPG:
org.w3.banana.binder.FromPG[org.w3.banana.jena.Jena,com.hp.hpl.jena.graph.Node_URI]
row("language").get.as[Rdf#URI].get
Any idea?