I need to write an HTTP client that will periodically download (and dump on disk) files that are much larger than the available memory.
What are the most appropriate strategies and HTTP client libraries for this task?
Libraries without bulky dependencies like Akka are a plus.
I found a reasonable solution that does not require adding any external dependencies. Only Scala/Java standard libraries.
import sys.process._
import java.net.URL
import java.io.File
new URL("http://download.thinkbroadband.com/1GB.zip") #> new File("/tmp/1gb.zip") !!
Bonus: add some headers and a conditional GET to the request
import java.net.URL
import java.util.Date
import java.nio.file.{Files, Paths, StandardCopyOption}

val url = new URL("http://download.thinkbroadband.com/1GB.zip")
val conn = url.openConnection
conn.setRequestProperty("Accept", "text/json")
conn.setIfModifiedSince(new Date().getTime - 1000 * 60 * 30)
// stream from the same connection the headers were set on; Files.copy never holds the whole body in memory
Files.copy(conn.getInputStream, Paths.get("/tmp/1gb.zip"), StandardCopyOption.REPLACE_EXISTING)
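If you can assume Java 11 or newer, the JDK's built-in java.net.http client is another zero-dependency option that streams straight to disk instead of buffering the body in memory. A minimal sketch, reusing the URL and target path from above:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.nio.file.Paths

val httpClient = HttpClient.newHttpClient()
val request = HttpRequest.newBuilder(URI.create("http://download.thinkbroadband.com/1GB.zip")).build()
// BodyHandlers.ofFile writes the response body to the file as it arrives, so memory use stays flat
val response = httpClient.send(request, HttpResponse.BodyHandlers.ofFile(Paths.get("/tmp/1gb.zip")))
println(response.statusCode())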
I am getting the error below while using HttpClient. Can you let me know how to use HttpClient correctly? I am new to elastic4s.
I want to connect Scala to an SSL-configured Elasticsearch cluster. I also want to know how I can pass SSL details such as the keystore path, truststore path, username, and password.
scala> import com.sksamuel.elastic4s.http.{HttpClient, HttpResponse}
import com.sksamuel.elastic4s.http.{HttpClient, HttpResponse}
scala> import com.sksamuel.elastic4s.http.ElasticDsl._
import com.sksamuel.elastic4s.http.ElasticDsl._
scala> val client = HttpClient(ElasticsearchClientUri(uri))
<console>:39: error: not found: value HttpClient
val client = HttpClient(ElasticsearchClientUri(uri))
HttpClient appears to be a trait in the codebase, but you seem to be using it as an object. You can check the implementation here. For your use case, I think the better approach would be to use ElasticClient. The code would look something like this:
import com.sksamuel.elastic4s.http._
import com.sksamuel.elastic4s.{ElasticClient, ElasticDsl, ElasticsearchClientUri}
val client = ElasticClient(ElasticsearchClientUri(uri))
I got the same problem, i.e. in my setup I got "not found" errors when trying to use HttpClient (elastic4s-core, elastic4s-http-streams and elastic4s-client-esjava version 7.3.1 on Scala 2.12.10).
The solution: you should be able to find and use JavaClient, an implementation of HttpClient that wraps the Elasticsearch Java Rest Client.
An example of how to use the JavaClient can be found here.
Thus, your code should look like the following:
import com.sksamuel.elastic4s.http.JavaClient
import com.sksamuel.elastic4s.{ElasticClient, ElasticDsl, ElasticProperties}
...
val client = ElasticClient(JavaClient(ElasticProperties(uri)))
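For completeness, a minimal usage sketch against elastic4s 7.x built on that client; the node URI, the index name my-index, and the blocking .await are placeholders for illustration only:

import com.sksamuel.elastic4s.ElasticDsl._
import com.sksamuel.elastic4s.http.JavaClient
import com.sksamuel.elastic4s.{ElasticClient, ElasticProperties}

val client = ElasticClient(JavaClient(ElasticProperties("http://localhost:9200")))

// "my-index" is a placeholder index; execute returns a Future, .await blocks on it
val resp = client.execute {
  search("my-index").matchAllQuery()
}.await

println(resp)
client.close()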
I want to send messages from a Twitter application to an Azure event hub. However, I am getting an error that says:
notebook:20: error: type mismatch;
found : java.util.concurrent.ExecutorService
required: java.util.concurrent.ScheduledExecutorService
val eventHubClient = EventHubClient.create(connStr.toString(), pool)
I do not know how to make the EventHubClient.create call work now. Please help.
I am referring to the code from this link:
https://learn.microsoft.com/en-us/azure/azure-databricks/databricks-stream-from-eventhubs
Also, I have tried the solution from the link Stream data into Azure Databricks using Event Hubs, and it doesn't work for me.
The cluster version is 5.2 (includes Apache Spark 2.4.0, Scala 2.11), which should include the Java SE 8 libraries that provide ScheduledExecutorService. Also, the attached libraries are com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.9 and org.twitter4j:twitter4j-core:4.0.7, so all the prerequisites should be met.
The code is:
import java._
import java.util._
import scala.collection.JavaConverters._
import com.microsoft.azure.eventhubs._
import java.util.concurrent._
import java.util.concurrent.ExecutorService
import java.util.concurrent.ScheduledExecutorService
val pool = Executors.newFixedThreadPool(1)
val eventHubClient = EventHubClient.create(connStr.toString(), pool)
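Based on the compiler error, the pool passed to EventHubClient.create has to be a ScheduledExecutorService rather than the plain ExecutorService returned by newFixedThreadPool. A minimal sketch of that change, reusing connStr from the code above:

import java.util.concurrent.{Executors, ScheduledExecutorService}
import com.microsoft.azure.eventhubs.EventHubClient

// newScheduledThreadPool returns the ScheduledExecutorService that the create call expects
val pool: ScheduledExecutorService = Executors.newScheduledThreadPool(1)
val eventHubClient = EventHubClient.create(connStr.toString(), pool)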
Using sbt package, I get the following error:
Spark Scala error while loading BytesWritable, invalid LOC header (bad signature)
My code is
....
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
......
object Test{
def main(args: Array[String]) {
val conf = new SparkConf().setAppName("Test")
val sc = new SparkContext(conf) // the error is caused by this line
......
}
}
Please reload your JARs and/or library dependencies, as they might have been corrupted while building the jar through sbt - it could be an issue with one of their updates. A second possibility is that you have too many temp files open: check ports 4040-4049 on the master to see whether any jobs are hanging, and kill them if so. You can also increase the open-file limit on Linux in /etc/security/limits.conf with hard nofile ***** and soft nofile *****, then reboot and run ulimit -n ****.
I was using spark-mllib_2.11 and it gave me the same error. I had to switch to the Scala 2.10 build of Spark MLlib to get rid of it.
Using Maven:
<artifactId>spark-mllib_2.10</artifactId>
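If you are building with sbt rather than Maven, the equivalent dependency line would look roughly like this; the version string below is a placeholder, not taken from the answer:

// build.sbt - "1.6.0" is a placeholder, substitute the Spark version your project uses
libraryDependencies += "org.apache.spark" % "spark-mllib_2.10" % "1.6.0"

Note the single % (not %%) so the Scala suffix stays pinned to 2.10 regardless of your build's scalaVersion.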
Mysteriously, when I use Spark my custom filesystem provider vanishes.
The full source for my example is available on github so you can follow.
I'm using maven to depend on gcloud-java-nio, which provides a Java FileSystem for Google Cloud Storage, via "gs://" URLs. My Spark project uses maven-shade-plugin to create one big jar with all the source in it.
The big jar correctly includes a META-INF/services/java.nio.file.spi.FileSystemProvider file, containing the correct name for the class (com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider). I checked and that class is also correctly included in the jar file.
The program uses FileSystemProvider.installedProviders() to list the filesystem providers it finds. "gs" should be listed (and it is if I run the same function in a non-Spark context), but when running with Spark on Dataproc, that provider's gone.
I'd like to know: How can I use a custom filesystem in my Spark program?
edit: Dennis Huo helpfully contributed that he sees the same problem when running on a Spark cluster, so the problem isn't specific to Dataproc. In fact, it also occurs when just using Scala. Also there are workarounds for the example I'm showing here, but I'd still like to know how to use a custom filesystem with Spark.
This doesn't appear to be a Dataproc-specific issue; it's more of a Scala issue (and Spark fundamentally depends on Scala). If I build your jarfile and then load it up with scala, independently of Dataproc or Spark, I get:
scala -cp spark-repro-1.0-SNAPSHOT.jar
scala> import java.nio.file.spi.FileSystemProvider
import java.nio.file.spi.FileSystemProvider
scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._
scala> FileSystemProvider.installedProviders().toList.foreach(l => println(l.getScheme()))
file
jar
scala> SparkRepro.listFS(1)
res3: String = Worker 1 installed filesystem providers: file jar
So it seems whatever bundling is being done isn't properly registering the FileSystem provider, at least for Scala. I tested the theory using the ListFilesystems example code (with the package declaration at the top removed for convenience) on both a Dataproc node and a manually created VM with Scala and Java 7 installed independently of Dataproc, just to double-check.
$ cat ListFilesystems.java
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.spi.FileSystemProvider;
/**
* ListFilesystems is a super-simple program that lists the available NIO filesystems.
*/
public class ListFilesystems {
/**
* See the class documentation.
*/
public static void main(String[] args) throws IOException {
listFilesystems();
}
private static void listFilesystems() {
System.out.println("Installed filesystem providers:");
for (FileSystemProvider p : FileSystemProvider.installedProviders()) {
System.out.println(" " + p.getScheme());
}
}
}
$ javac ListFilesystems.java
Running using java and then scala:
$ java -cp spark-repro-1.0-SNAPSHOT.jar:. ListFilesystems
Installed filesystem providers:
file
jar
gs
$ scala -cp spark-repro-1.0-SNAPSHOT.jar:. ListFilesystems
Installed filesystem providers:
file
jar
$
This was the same on both Dataproc and my non-Dataproc VM. It looks like there are still unresolved difficulties getting the FileSystemProviders to load properly in Scala, and there doesn't seem to be an easy way to dynamically register them system-wide at runtime either; the most I could find was this old thread that didn't seem to come to any useful conclusion.
Fortunately though, it looks like at least the CloudStorageFileSystemProvider has no problem making it onto the classpath, so you can at least fall back to explicitly creating an instance of the cloud storage provider to use:
new com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider()
.getFileSystem(new java.net.URI("gs://my-bucket"))
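To read through that FileSystem, the standard java.nio.file.Files calls should work as usual, since they delegate to the Path's own provider. A small sketch where the bucket and object names are placeholders:

import java.nio.file.Files
import com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider

// "my-bucket" and "/my-data.txt" are placeholder names for illustration
val fs = new CloudStorageFileSystemProvider()
  .getFileSystem(new java.net.URI("gs://my-bucket"))
val in = Files.newInputStream(fs.getPath("/my-data.txt"))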
Alternatively, if you're using Spark anyway, you might want to consider just using the Hadoop FileSystem interfaces. They're very similar to the Java NIO FileSystem API (in fact, they predate it), and they're more portable for now. You can easily do things like:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.conf.Configuration;
...
Path foo = new Path("gs://my-bucket/my-data.txt");
InputStream is = foo.getFileSystem(new Configuration()).open(foo);
...
The benefit of working with the Hadoop FileSystem interfaces is that you're guaranteed the configuration/settings will be clean both in your driver program and in the distributed worker nodes. For example, sometimes you'll need to modify filesystem settings just for a single job running in a Dataproc cluster; then you can plumb through Hadoop properties which are properly scoped for a single job without interfering with other jobs running at the same time.
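For instance, a sketch of the per-job plumbing meant here, assuming a SparkContext named sc is in scope; the property name fs.gs.block.size and the paths are placeholders:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// copy the cluster-wide configuration so the override stays local to this job
val jobConf = new Configuration(sc.hadoopConfiguration)
jobConf.set("fs.gs.block.size", "134217728")  // placeholder property and value

val data = new Path("gs://my-bucket/my-data.txt")
val in = data.getFileSystem(jobConf).open(data)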
My comment on the linked ticket (https://github.com/scala/bug/issues/10247):
Using scala -toolcp path makes your app jar available to the system class loader.
Or, use the API where you can provide the class loader to find providers which are not "installed."
The scala runner script has a few interacting parts, like -Dscala.usejavacp and -nobootcp, which possibly behave differently on Windows. It's not always obvious which incantation to use.
The misunderstanding here is the assumption that java and scala do the same thing with respect to -cp.
This example shows loading a test provider from a build dir. The "default" provider comes first, hence the strange ordering.
$ skala -toolcp ~/bin
Welcome to Scala 2.12.2 (OpenJDK 64-Bit Server VM 1.8.0_112)
scala> import java.nio.file.spi.FileSystemProvider
import java.nio.file.spi.FileSystemProvider
scala> FileSystemProvider.installedProviders
res0: java.util.List[java.nio.file.spi.FileSystemProvider] = [sun.nio.fs.LinuxFileSystemProvider@12abdfb, com.acme.FlakeyFileSystemProvider@b0e5507, com.acme.FlakeyTPDFileSystemProvider@6bbe50c9, com.sun.nio.zipfs.ZipFileSystemProvider@3c46dcbe]
scala> :quit
Or specifying loader:
$ skala -cp ~/bin
Welcome to Scala 2.12.2 (OpenJDK 64-Bit Server VM 1.8.0_112)
scala> import java.net.URI
import java.net.URI
scala> val uri = URI.create("tpd:///?count=10000")
uri: java.net.URI = tpd:///?count=10000
scala> import collection.JavaConverters._
import collection.JavaConverters._
scala> val em = Map.empty[String, AnyRef].asJava
em: java.util.Map[String,AnyRef] = {}
scala> import java.nio.file.FileSystems
import java.nio.file.FileSystems
scala> FileSystems.
getDefault getFileSystem newFileSystem
scala> FileSystems.newFileSystem(uri, em, $intp.classLoader)
res1: java.nio.file.FileSystem = com.acme.FlakeyFileSystemProvider$FlakeyFileSystem@2553dcc0
I need to consume an xmlrpc service from Scala, and so far it looks like my only option is the Apache XML-RPC library.
I added this dependency to my Build.scala:
"org.apache.xmlrpc" % "xmlrpc" % "3.1.3"
and sbt reported no problem in downloading the library. However, I don't know how to go about actually accessing the library.
val xml = org.apache.xmlrpc.XmlRpcClient("http://foo") wouldn't compile
and
import org.apache.xmlrpc._
reported that object xmlrpc was not a member of package org.apache.
What would be the correct package to import?
(Or, is there a better library for XmlRpc from Scala?)
Try
"org.apache.xmlrpc" % "xmlrpc-client" % "3.1.3"
and use it like this:
class XmlRpc(val serverURL: String) {
  import org.apache.xmlrpc.client.XmlRpcClient
  import org.apache.xmlrpc.client.XmlRpcClientConfigImpl
  import org.apache.xmlrpc.client.XmlRpcSunHttpTransportFactory
  import java.net.URL

  // point the client configuration at the remote XML-RPC endpoint
  val config = new XmlRpcClientConfigImpl()
  config.setServerURL(new URL(serverURL))
  config.setEncoding("ISO-8859-1")

  // the client sends requests through the plain java.net transport
  val client = new XmlRpcClient()
  client.setTransportFactory(new XmlRpcSunHttpTransportFactory(client))
  client.setConfig(config)

  client.execute(...)
}
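A hedged usage sketch of the class above; the endpoint, the method name examples.getStateName, and its argument are placeholders for whatever the target service actually exposes:

// hypothetical endpoint and RPC method, for illustration only
val rpc = new XmlRpc("http://foo/xmlrpc")
val result = rpc.client.execute("examples.getStateName", Array[AnyRef](Integer.valueOf(41)))
println(result)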
There is a good module for this kind of task:
https://github.com/jvican/xmlrpc