Create or load an ontology in Eclipse using the OWL API

Hello everyone. I wrote my ontology in Protégé and added the OWL API to my Eclipse project. I want to load my own ontology in the Eclipse project with this code:
import static org.junit.Assert.*;
import static org.semanticweb.owlapi.search.Searcher.annotations;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.annotation.Nonnull;
import org.junit.*;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.formats.OWLXMLDocumentFormat;
import org.semanticweb.owlapi.io.StreamDocumentTarget;
import org.semanticweb.owlapi.io.StringDocumentSource;
import org.semanticweb.owlapi.io.StringDocumentTarget;
import org.semanticweb.owlapi.model.AddAxiom;
import org.semanticweb.owlapi.model.AddOntologyAnnotation;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLAnnotation;
import org.semanticweb.owlapi.model.OWLAnnotationProperty;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLClassAssertionAxiom;
import org.semanticweb.owlapi.model.OWLClassExpression;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLDataProperty;
import org.semanticweb.owlapi.model.OWLDataPropertyAssertionAxiom;
import org.semanticweb.owlapi.model.OWLDataPropertyRangeAxiom;
import org.semanticweb.owlapi.model.OWLDataRange;
import org.semanticweb.owlapi.model.OWLDatatype;
import org.semanticweb.owlapi.model.OWLDatatypeDefinitionAxiom;
import org.semanticweb.owlapi.model.OWLDatatypeRestriction;
import org.semanticweb.owlapi.model.OWLEntity;
import org.semanticweb.owlapi.model.OWLException;
import org.semanticweb.owlapi.model.OWLIndividual;
import org.semanticweb.owlapi.model.OWLLiteral;
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.model.OWLObjectPropertyAssertionAxiom;
import org.semanticweb.owlapi.model.OWLObjectPropertyExpression;
import org.semanticweb.owlapi.model.OWLObjectSomeValuesFrom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyIRIMapper;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLSubClassOfAxiom;
import org.semanticweb.owlapi.model.PrefixManager;
import org.semanticweb.owlapi.model.RemoveAxiom;
import org.semanticweb.owlapi.model.SWRLAtom;
import org.semanticweb.owlapi.model.SWRLClassAtom;
import org.semanticweb.owlapi.model.SWRLObjectPropertyAtom;
import org.semanticweb.owlapi.model.SWRLRule;
import org.semanticweb.owlapi.model.SWRLVariable;
import org.semanticweb.owlapi.profiles.OWL2DLProfile;
import org.semanticweb.owlapi.profiles.OWLProfileReport;
import org.semanticweb.owlapi.profiles.OWLProfileViolation;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.Node;
import org.semanticweb.owlapi.reasoner.NodeSet;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerConfiguration;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
import org.semanticweb.owlapi.reasoner.ReasonerProgressMonitor;
import org.semanticweb.owlapi.reasoner.SimpleConfiguration;
import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;
import org.semanticweb.owlapi.util.AutoIRIMapper;
import org.semanticweb.owlapi.util.DefaultPrefixManager;
import org.semanticweb.owlapi.util.InferredAxiomGenerator;
import org.semanticweb.owlapi.util.InferredOntologyGenerator;
import org.semanticweb.owlapi.util.InferredSubClassAxiomGenerator;
import org.semanticweb.owlapi.util.OWLClassExpressionVisitorAdapter;
import org.semanticweb.owlapi.util.OWLEntityRemover;
import org.semanticweb.owlapi.util.OWLOntologyMerger;
import org.semanticweb.owlapi.util.OWLOntologyWalker;
import org.semanticweb.owlapi.util.OWLOntologyWalkerVisitorEx;
import org.semanticweb.owlapi.util.PriorityCollection;
import org.semanticweb.owlapi.util.SimpleIRIMapper;
import org.semanticweb.owlapi.vocab.OWL2Datatype;
import org.semanticweb.owlapi.vocab.OWLFacet;
import org.semanticweb.owlapi.vocab.OWLRDFVocabulary;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import uk.ac.manchester.cs.owlapi.modularity.ModuleType;
import uk.ac.manchester.cs.owlapi.modularity.SyntacticLocalityModuleExtractor;
@SuppressWarnings({ "javadoc", "null" })
public class test {
    public static void main(String[] args) {
        OWLOntologyManager m = OWLManager.createOWLOntologyManager();
        PriorityCollection<OWLOntologyIRIMapper> iriMappers = m.getIRIMappers();
        iriMappers.add(new AutoIRIMapper(new File("materializedOntologies"), true));
        OWLOntology o = m.loadOntologyFromOntologyDocument(food);
        assertNotNull(o);
    }
}
I don't know what's wrong here. I am also trying to create an ontology, so if you have sample code, please share it. I am new to Protégé and also to the OWL API. Help please.

Loading an ontology with the OWL API:
// load file
File file = new File("Ontology.owl");
// loading the ontology
try {
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    OWLOntology localOntology = manager.loadOntologyFromOntologyDocument(file);
    // getting all axioms
    Set<OWLAxiom> axSet = localOntology.getAxioms();
} catch (OWLOntologyCreationException e) {
    e.printStackTrace();
}
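The question also asks for sample code that creates an ontology. Below is a minimal sketch, assuming OWL API 4.x and reusing the imports already listed above; the ontology IRI, the Food/Pizza class names, and the food.owl output file are placeholder examples, not names taken from the original ontology:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory factory = manager.getOWLDataFactory();
// create an empty ontology with a placeholder IRI
IRI ontologyIRI = IRI.create("http://example.org/food");
OWLOntology ontology = manager.createOntology(ontologyIRI);
// declare two classes and assert Pizza subClassOf Food
OWLClass foodClass = factory.getOWLClass(IRI.create(ontologyIRI + "#Food"));
OWLClass pizzaClass = factory.getOWLClass(IRI.create(ontologyIRI + "#Pizza"));
manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(pizzaClass, foodClass));
// save the ontology in OWL/XML syntax next to the project
manager.saveOntology(ontology, new OWLXMLDocumentFormat(), IRI.create(new File("food.owl").toURI()));
Note that createOntology and saveOntology throw OWLOntologyCreationException and OWLOntologyStorageException, so wrap the calls in a try/catch (or declare the exceptions), just like in the loading example above.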

Related

ImportError: cannot import name 'DataOwner' from 'common'

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
#import DataOwner
from common import DataOwner
#from common import LogisticRegression
#from common import ModelOwner

Can not resolve import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;

Cannot resolve:
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;
You need to replace
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;
with
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat;
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;
These AndroidX classes are only available once the project has been migrated to AndroidX, which typically means setting android.useAndroidX=true (and, if you still depend on old support-library artifacts, android.enableJetifier=true) in gradle.properties.

Unit test case to mock postgresql Connection and statements in SCALA

I am very new to Scala and need to write a test case that mocks the PostgreSQL connection and statements. However, I am unable to do so and I am getting an error. Can anyone help me? Below is the code that I've written.
Thanks in advance!
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.Column
import org.slf4j.LoggerFactory
import java.nio.file.Paths
import java.sql.ResultSet
import java.io.InputStream
import java.io.Reader
import java.util
import java.io.File
import java.util.UUID
import java.nio.file.attribute.PosixFilePermission
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.scalatest.{Matchers, WordSpecLike, BeforeAndAfter}
import org.scalactic.{Good, Bad, Many, One}
import scala.collection.JavaConverters._
import spark.jobserver.{SparkJobValid, SparkJobInvalid}
import spark.jobserver.api.{JobEnvironment, SingleProblem}
import org.apache.spark.sql.{Column, Row, DataFrame}
import java.sql.Connection
import java.sql.DriverManager
import java.sql.ResultSet
import org.junit.Assert
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith
import org.easymock.EasyMock.expect
import org.powermock.api._
import org.powermock.core.classloader.annotations.PrepareForTest
import java.io.FileReader
import org.scalamock.scalatest.MockFactory
import org.powermock.core.classloader.annotations.PrepareForTest
import org.powermock.api.mockito.PowerMockito
import org.powermock.api.mockito.PowerMockito._
import org.postgresql.copy.CopyManager
import scala.collection.JavaConversions._
import org.mockito.Matchers.any
import java.sql.Statement
class mockCopyManager() {
    def copyIn(command: String, fR: java.io.FileReader): Unit = {
        println("Run Command {}".format(command))
    }
}
class AdvisoretlSpec extends WordSpecLike with Matchers with MockFactory {
    val sc = SparkUnitTestContext.hiveContext
    import SparkUnitTestContext.defaultSizeInBytes
    "Class Advisoretl job" should {
        "load data in" in {
            val csvMap: Map[String, String] = Map("t1" -> "t1.csv", "t2" -> "t2.csv")
            val testObj = new Advisoretl()
            val mockStatement = mock[Statement]
            val mockConnection = mock[Connection]
            val a: String = "TRUNCATE TABLE t1"
            val b: String = "TRUNCATE TABLE t2"
            PowerMockito.mockStatic(classOf[DriverManager])
            val mockCopyManager = mock[CopyManager]
            PowerMockito.when(DriverManager.getConnection(any[String]), Nil: _*).thenReturn(mockConnection)
            (mockConnection.createStatement _).when().returns(mockStatement)
            (mockStatement.executeUpdate _).when(a).returns(1)
            (mockStatement.executeUpdate _).when("TRUNCATE TABLE t2").returns(1)
            (mockCopyManager.copyIn _).when(*).returns(1)
            val fnResult = testObj.connectionWithPostgres("a", "b", "c", "target/testdata", csvMap)
            fnResult should be ("OK")
        }
    }
}

Why recommendProductsForUsers is not a member of org.apache.spark.mllib.recommendation.MatrixFactorizationModel

I have built a recommendation system using Spark with ALS collaborative filtering (MLlib).
My code snippet:
bestModel.get
.predict(toBePredictedBroadcasted.value)
Everything is OK, but I need to change the code to fulfil a requirement. I read in the Scala doc here
that I need to use def recommendProducts,
but when I tried it in my code:
bestModel.get.recommendProductsForUsers(100)
I get an error when compiling:
value recommendProductsForUsers is not a member of org.apache.spark.mllib.recommendation.MatrixFactorizationModel
[error] bestModel.get.recommendProductsForUsers(100)
Maybe someone can help me.
Thanks.
NB: I use Spark 1.5.0.
My imports:
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import java.io.File
import scala.io.Source
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.rdd._
import org.apache.spark.mllib.recommendation.{ALS, Rating, MatrixFactorizationModel}
import org.apache.spark.sql.SQLContext
import org.apache.spark.broadcast.Broadcast

How can I load Avros in Spark using the schema on-board the Avro file(s)?

I am running CDH 4.4 with Spark 0.9.0 from a Cloudera parcel.
I have a bunch of Avro files that were created via Pig's AvroStorage UDF. I want to load these files in Spark, using a generic record or the schema onboard the Avro files. So far I've tried this:
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyInputFormat
import org.apache.hadoop.io.NullWritable
import org.apache.commons.lang.StringEscapeUtils.escapeCsv
import org.apache.hadoop.fs.Path
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.conf.Configuration
import java.net.URI
import java.io.BufferedInputStream
import java.io.File
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.specific.SpecificDatumReader
import org.apache.avro.file.DataFileStream
import org.apache.avro.io.DatumReader
import org.apache.avro.file.DataFileReader
import org.apache.avro.mapred.FsInput
val input = "hdfs://hivecluster2/securityx/web_proxy_mef/2014/05/29/22/part-m-00016.avro"
val inURI = new URI(input)
val inPath = new Path(inURI)
val fsInput = new FsInput(inPath, sc.hadoopConfiguration)
val reader = new GenericDatumReader[GenericRecord]
val dataFileReader = DataFileReader.openReader(fsInput, reader)
val schemaString = dataFileReader.getSchema
val buf = scala.collection.mutable.ListBuffer.empty[GenericRecord]
while (dataFileReader.hasNext) {
    buf += dataFileReader.next
}
sc.parallelize(buf)
This works for one file, but it can't scale: I am loading all the data into local RAM and then distributing it across the Spark nodes from there.
To answer my own question:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapred.AvroInputFormat
import org.apache.avro.mapreduce.AvroKeyInputFormat
import org.apache.hadoop.io.NullWritable
import org.apache.commons.lang.StringEscapeUtils.escapeCsv
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import org.apache.hadoop.conf.Configuration
import java.io.BufferedInputStream
import org.apache.avro.file.DataFileStream
import org.apache.avro.io.DatumReader
import org.apache.avro.file.DataFileReader
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.mapred.FsInput
import org.apache.avro.Schema
import org.apache.avro.Schema.Parser
import org.apache.hadoop.mapred.JobConf
import java.io.File
import java.net.URI
// spark-shell -usejavacp -classpath "*.jar"
val input = "hdfs://hivecluster2/securityx/web_proxy_mef/2014/05/29/22/part-m-00016.avro"
val jobConf= new JobConf(sc.hadoopConfiguration)
val rdd = sc.hadoopFile(
    input,
    classOf[org.apache.avro.mapred.AvroInputFormat[GenericRecord]],
    classOf[org.apache.avro.mapred.AvroWrapper[GenericRecord]],
    classOf[org.apache.hadoop.io.NullWritable],
    10)
val f1 = rdd.first
val a = f1._1.datum
a.get("rawLog") // Access avro fields
This works for me:
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.{AvroInputFormat, AvroWrapper}
import org.apache.hadoop.io.NullWritable
...
val path = "hdfs:///path/to/your/avro/folder"
val avroRDD = sc.hadoopFile[AvroWrapper[GenericRecord], NullWritable, AvroInputFormat[GenericRecord]](path)