Unable to create HBase table from Windows Eclipse

I am trying to create an HBase table from Eclipse installed on Windows, with a Cloudera VM running. I have put the IP "192.168.1.5" in the Windows hosts file and in the VM's hosts file, and I have included all the HBase jar files. Can you please guide me on how to connect Eclipse to the Cloudera VM? The job is not throwing any error, but it keeps running for a long time. Please suggest.
package hbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.1.5");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");

        //HTable usersTable = new HTable(conf, "users");
        //HBaseAdmin hbase = new HBaseAdmin(conf);
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Instantiating table descriptor class
        HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("emp"));

        // Adding column families to table descriptor
        tableDescriptor.addFamily(new HColumnDescriptor("personal"));
        tableDescriptor.addFamily(new HColumnDescriptor("professional"));

        System.out.println(" Table created started ");

        // Execute the table through admin
        admin.createTable(tableDescriptor);
        System.out.println(" Table created ");
    }
}
The output after execution:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Table created started

Add the IP address of your Cloudera VM (192.168.1.5) to the Windows hosts file at
C:\WINDOWS\system32\drivers\etc\hosts
and add this property to the configuration:
conf.set("hbase.master", "192.168.1.5:60000");

Related

PackageDescr incompatibility - Drools migration from 5.5.0 to 6.2.0

We are migrating Drools from 5.5.0 to 6.2.0. I am not able to figure out a compatible usage of org.drools.compiler.lang.descr.PackageDescr, which implements org.kie.internal.definition.KnowledgeDescr. We get an instance of PackageDescr from DrlParser.parse and then create an org.drools.io.Resource using org.drools.io.ResourceFactory.newDescrResource(KnowledgeDescr), but the argument there is of type org.drools.definition.KnowledgeDescr, which is not the same KnowledgeDescr that PackageDescr implements. We are not switching to the KIE API for now, only using the knowledge legacy adapter. What is the replacement that fixes this?
Usage example:
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderConfiguration;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.compiler.lang.descr.PackageDescr;
KnowledgeBuilderConfiguration builderConf = ...
PackageDescr pkg = ...
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(builderConf);
// Compilation error in this line
kbuilder.add(ResourceFactory.newDescrResource(pkg), ResourceType.DESCR);
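A hedged sketch of one possible fix, under the assumption that the 6.x legacy adapter moved these factories into the org.kie.internal packages and that org.kie.internal.io.ResourceFactory still exposes a newDescrResource overload taking org.kie.internal.definition.KnowledgeDescr (the interface PackageDescr now implements); verify the signatures against the kie-internal/knowledge-api 6.2.0 jars before relying on this:

import org.drools.compiler.lang.descr.PackageDescr;
import org.kie.api.io.ResourceType;
import org.kie.internal.builder.KnowledgeBuilder;
import org.kie.internal.builder.KnowledgeBuilderConfiguration;
import org.kie.internal.builder.KnowledgeBuilderFactory;
import org.kie.internal.io.ResourceFactory;   // assumed replacement for org.drools.io.ResourceFactory

KnowledgeBuilderConfiguration builderConf = ...
PackageDescr pkg = ...                         // from DrlParser.parse, as before
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(builderConf);
// PackageDescr implements org.kie.internal.definition.KnowledgeDescr, which is the
// parameter type this overload is assumed to accept in 6.x.
kbuilder.add(ResourceFactory.newDescrResource(pkg), ResourceType.DESCR);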

How to import a custom AWS rule

I'm trying to import a rule that looks like this in AWS:
[screenshot of the security group rule in the AWS console: protocol "Destination Unreachable", port range "fragmentation required, and DF flag set", source 0.0.0.0/0]
I tried a command like this but it fails:
terraform import aws_security_group_rule.ke_ingress_icmp sg-0af205c29967e5084_ingress_icmp_-1_-1_0.0.0.0/0
aws_security_group_rule.ke_ingress_icmp: Importing from ID "sg-0af205c29967e5084_ingress_icmp_-1_-1_0.0.0.0/0"...
aws_security_group_rule.ke_ingress_icmp: Import prepared!
Prepared aws_security_group_rule for import
aws_security_group_rule.ke_ingress_icmp: Refreshing state... [id=sg-0af205c29967e5084_ingress_icmp_-1_-1_0.0.0.0/0]
Error: Cannot import non-existent remote object
I'm not sure how to represent protocol="Destination Unreachable" and port range="fragmentation required, and DF flag set" in the import command. Does anyone have any ideas?

Scala Notebook Type Mismatch error when creating Event Hub for streaming tweets

I want to send messages from a Twitter application to an Azure event hub. However, I am getting an error that says:
notebook:20: error: type mismatch;
found : java.util.concurrent.ExecutorService
required: java.util.concurrent.ScheduledExecutorService
val eventHubClient = EventHubClient.create(connStr.toString(), pool)
I do not know how to call EventHubClient.create now. Please help.
I am referring to the code from
https://learn.microsoft.com/en-us/azure/azure-databricks/databricks-stream-from-eventhubs.
I have also tried the solution from "Stream data into Azure Databricks using Event Hubs", and it doesn't work for me.
The version of the cluster is 5.2 (includes Apache Spark 2.4.0, Scala 2.11) which should include the Java SE 8 libraries that have the new ScheduledExecutorService member. Also, the libraries attached are com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.9 and org.twitter4j:twitter4j-core:4.0.7, so again all the prerequisites are met.
The code is:
import java._
import java.util._
import scala.collection.JavaConverters._
import com.microsoft.azure.eventhubs._
import java.util.concurrent._
import java.util.concurrent.ExecutorService
import java.util.concurrent.ScheduledExecutorService
val pool = Executors.newFixedThreadPool(1)
val eventHubClient = EventHubClient.create(connStr.toString(), pool)
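The error message itself points at the fix: this version of EventHubClient.create expects a java.util.concurrent.ScheduledExecutorService, while Executors.newFixedThreadPool returns a plain ExecutorService. A hedged sketch of the change, written in Java for clarity; in the Scala notebook it amounts to replacing Executors.newFixedThreadPool(1) with Executors.newScheduledThreadPool(1):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import com.microsoft.azure.eventhubs.EventHubClient;

// newScheduledThreadPool returns a ScheduledExecutorService, which is the type the
// compiler error says create(...) requires here.
ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
// connStr is assumed to be the ConnectionStringBuilder built earlier in the notebook;
// in the Java API, create(...) returns a CompletableFuture.
CompletableFuture<EventHubClient> eventHubClient = EventHubClient.create(connStr.toString(), pool);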

Filesystem provider disappearing in Spark?

Mysteriously, when I use Spark my custom filesystem provider vanishes.
The full source for my example is available on GitHub so you can follow along.
I'm using Maven to depend on gcloud-java-nio, which provides a Java FileSystem for Google Cloud Storage via "gs://" URLs. My Spark project uses maven-shade-plugin to create one big jar with all the source in it.
The big jar correctly includes a META-INF/services/java.nio.file.spi.FileSystemProvider file, containing the correct name for the class (com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider). I checked and that class is also correctly included in the jar file.
The program uses FileSystemProvider.installedProviders() to list the filesystem providers it finds. "gs" should be listed (and it is if I run the same function in a non-Spark context), but when running with Spark on Dataproc, that provider's gone.
I'd like to know: How can I use a custom filesystem in my Spark program?
edit: Dennis Huo helpfully contributed that he sees the same problem when running on a Spark cluster, so the problem isn't specific to Dataproc. In fact, it also occurs when just using Scala. Also there are workarounds for the example I'm showing here, but I'd still like to know how to use a custom filesystem with Spark.
This doesn't appear to be a Dataproc-specific issue; it's more of a Scala issue, and Spark fundamentally depends on Scala. If I build your jarfile and then load it up with scala independently of Dataproc or Spark, I get:
scala -cp spark-repro-1.0-SNAPSHOT.jar
scala> import java.nio.file.spi.FileSystemProvider
import java.nio.file.spi.FileSystemProvider
scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._
scala> FileSystemProvider.installedProviders().toList.foreach(l => println(l.getScheme()))
file
jar
scala> SparkRepro.listFS(1)
res3: String = Worker 1 installed filesystem providers: file jar
So it seems any bundling that's being done isn't properly registering the FileSystem provider, at least for scala. I tested the theory using the ListFilesystems example code (just removed the package at the top for convenience) on both a Dataproc node as well as a manually created VM with scala and Java 7 installed independently of Dataproc just to double-check.
$ cat ListFilesystems.java
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.spi.FileSystemProvider;
/**
* ListFilesystems is a super-simple program that lists the available NIO filesystems.
*/
public class ListFilesystems {
    /**
     * See the class documentation.
     */
    public static void main(String[] args) throws IOException {
        listFilesystems();
    }

    private static void listFilesystems() {
        System.out.println("Installed filesystem providers:");
        for (FileSystemProvider p : FileSystemProvider.installedProviders()) {
            System.out.println(" " + p.getScheme());
        }
    }
}
$ javac ListFilesystems.java
Running using java and then scala:
$ java -cp spark-repro-1.0-SNAPSHOT.jar:. ListFilesystems
Installed filesystem providers:
file
jar
gs
$ scala -cp spark-repro-1.0-SNAPSHOT.jar:. ListFilesystems
Installed filesystem providers:
file
jar
$
This was the same on both Dataproc and my non-Dataproc VM. It looks like there are still unresolved difficulties getting the FileSystemProviders to load properly in Scala, and there doesn't seem to be an easy way to dynamically register them system-wide at runtime either; the most I could find was this old thread that didn't seem to come to any useful conclusion.
Fortunately though, it looks like at least the CloudStorageFileSystemProvider has no problem making it onto the classpath, so you can at least fall back to explicitly creating an instance of the cloud storage provider to use:
new com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider()
.getFileSystem(new java.net.URI("gs://my-bucket"))
Alternatively, if you're using Spark anyway, you might want to consider just using the Hadoop FileSystem interfaces. The Hadoop FileSystem API is very similar to the Java NIO FileSystems (in fact, it predates the Java NIO FileSystem stuff), and it's more portable for now. You can easily do things like:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.conf.Configuration;
...
Path foo = new Path("gs://my-bucket/my-data.txt");
InputStream is = foo.getFileSystem(new Configuration()).open(foo);
...
The benefit of working with the Hadoop FileSystem interfaces is that you're guaranteed the configuration/settings will be clean both in your driver program and in the distributed worker nodes. For example, sometimes you'll need to modify filesystem settings just for a single job running in a Dataproc cluster; then you can plumb through Hadoop properties which are properly scoped for a single job without interfering with other jobs running at the same time.
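As an illustration of that last point, here is a hedged sketch of setting a per-job Hadoop property from the driver; the property name fs.gs.block.size is only an example of a GCS-connector setting, and the same effect can be had at submit time with --conf spark.hadoop.<key>=<value>:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class PerJobFsSettings {
    public static void main(String[] args) throws Exception {
        JavaSparkContext jsc = new JavaSparkContext(new SparkConf().setAppName("per-job-fs-settings"));

        // Override the (example) property only in this job's Hadoop Configuration;
        // cluster-wide defaults and other concurrently running jobs are untouched.
        jsc.hadoopConfiguration().set("fs.gs.block.size", "134217728");

        Path foo = new Path("gs://my-bucket/my-data.txt");
        FileSystem fs = foo.getFileSystem(jsc.hadoopConfiguration());
        System.out.println("exists: " + fs.exists(foo));

        jsc.stop();
    }
}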
My comment on the linked ticket (https://github.com/scala/bug/issues/10247):
Using scala -toolcp path makes your app jar available to the system class loader.
Or, use the API where you can provide the class loader to find providers which are not "installed."
The scala runner script has a few interacting parts, like -Dscala.usejavacp and -nobootcp, which possibly behave differently on Windows. It's not always obvious which incantation to use.
The misunderstanding here is the assumption that java and scala do the same thing with respect to -cp.
This example shows loading a test provider from a build dir. The "default" provider comes first, hence the strange ordering.
$ skala -toolcp ~/bin
Welcome to Scala 2.12.2 (OpenJDK 64-Bit Server VM 1.8.0_112)
scala> import java.nio.file.spi.FileSystemProvider
import java.nio.file.spi.FileSystemProvider
scala> FileSystemProvider.installedProviders
res0: java.util.List[java.nio.file.spi.FileSystemProvider] = [sun.nio.fs.LinuxFileSystemProvider@12abdfb, com.acme.FlakeyFileSystemProvider@b0e5507, com.acme.FlakeyTPDFileSystemProvider@6bbe50c9, com.sun.nio.zipfs.ZipFileSystemProvider@3c46dcbe]
scala> :quit
Or specifying loader:
$ skala -cp ~/bin
Welcome to Scala 2.12.2 (OpenJDK 64-Bit Server VM 1.8.0_112)
scala> import java.net.URI
import java.net.URI
scala> val uri = URI.create("tpd:///?count=10000")
uri: java.net.URI = tpd:///?count=10000
scala> import collection.JavaConverters._
import collection.JavaConverters._
scala> val em = Map.empty[String, AnyRef].asJava
em: java.util.Map[String,AnyRef] = {}
scala> import java.nio.file.FileSystems
import java.nio.file.FileSystems
scala> FileSystems.
getDefault getFileSystem newFileSystem
scala> FileSystems.newFileSystem(uri, em, $intp.classLoader)
res1: java.nio.file.FileSystem = com.acme.FlakeyFileSystemProvider$FlakeyFileSystem@2553dcc0

Scala Cloud Foundry MongoDB: access to MongoDB denied

I have installed Eclipse, the Cloud Foundry plugin, the Scala plugin, the Vaadin plugin (for web development), and the MongoDB libraries.
I created a class like this:
import vaadin.scala.Application
import vaadin.scala.VerticalLayout
import com.mongodb.casbah.MongoConnection
import com.mongodb.casbah.commons.MongoDBObject
import vaadin.scala.Label
import vaadin.scala.Button

class Launcher extends Application {
  val label = new Label

  override def main = new VerticalLayout() {
    val coll = MongoConnection()("mybd")("somecollection")
    val builder = MongoDBObject.newBuilder
    builder += "foo1" -> "bar"
    var newobj = builder.result()
    coll.save(newobj)

    val mongoColl = MongoConnection()("mybd")("somecollection")
    val withFoo = mongoColl.findOne()
    label.value = withFoo
    add(label)

    // button to make it look nice
    add(new Button {
      caption_=("click me!")
    })
  }
}
The error (access to the MongoDB database is denied) comes from the connection parameters, which are the default ones.
Do you know how to set up the correct parameters in Scala or in Java?
Looks like you got some help on the vcap-dev mailing list
package com.example.vaadin_1

import vaadin.scala.Application
import org.cloudfoundry.runtime.env.CloudEnvironment
import org.cloudfoundry.runtime.env.MongoServiceInfo
import com.mongodb.casbah.MongoConnection

class Launcher extends Application {
  val cloudEnvironment = new CloudEnvironment()
  val mongoServices = cloudEnvironment.getServiceInfos(classOf[MongoServiceInfo])
  val mongo = mongoServices.get(0)
  val mongodb = MongoConnection(mongo.getHost(), mongo.getPort())("abc")
  mongodb.authenticate(mongo.getUserName(), mongo.getPassword())
}
I would suggest doing it with Spring Data for MongoDB; there is a sample application for Cloud Foundry in particular, put together by the Spring guys. With a bit of XML configuration you have a ready-to-inject mongoTemplate, similar to the familiar Spring xxxTemplate paradigm.
When deploying to Cloud Foundry, the information needed to connect to a service (i.e. Mongo in your case) is made available to the app through the VCAP_SERVICES environment variable. It is a JSON document with one entry per service. You could of course parse it yourself, but you will find the CloudEnvironment class (http://cf-runtime-api.cloudfoundry.com/org/cloudfoundry/runtime/env/CloudEnvironment.html) useful. You will need to add the org.cloudfoundry/cloudfoundry-runtime/0.8.1 jar to your project. You can use it without Spring.
Have a look at http://docs.cloudfoundry.com/services.html for explanation of the VCAP_SERVICES underlying var
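For completeness, a minimal sketch of the parse-it-yourself route mentioned above; the JSON shape in the comment is abridged, and the parsing itself is left to whatever JSON library you already have on the classpath:

// Cloud Foundry injects the service bindings as a JSON document, roughly of the form
// {"mongodb-1.8":[{"credentials":{"hostname":...,"port":...,"username":...,"password":...,"db":...}}]}
// (abridged; check your own VCAP_SERVICES output for the exact keys).
String vcapServices = System.getenv("VCAP_SERVICES");
if (vcapServices == null || vcapServices.isEmpty()) {
    // Not running on Cloud Foundry: fall back to local MongoDB defaults.
} else {
    // Feed the string to your JSON library of choice and pull the credentials
    // block for the bound MongoDB service out of it.
}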