I am using Scala 2.12 and have the following dependencies in my build.sbt.
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.10.1.0"
libraryDependencies += "io.confluent" % "kafka-avro-serializer" % "3.1.1"
libraryDependencies += "io.confluent" % "common-config" % "3.1.1"
libraryDependencies += "io.confluent" % "common-utils" % "3.1.1"
libraryDependencies += "io.confluent" % "kafka-schema-registry-client" % "3.1.1"
Thanks to this community, I am able to convert my raw data to the required Avro format.
We need to use the Confluent libraries to serialize and send the data to Kafka topics.
I am using the following properties and Avro record.
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "io.confluent.kafka.serializers.KafkaAvroSerializer")
properties.put("schema.registry.url", "http://myschemahost:8081")
I am showing only the relevant snippet of code for brevity.
val producer = new KafkaProducer[String, GenericData.Record](properties)
val schema = new Schema.Parser().parse(new File(schemaFileName))
var avroRecord = new GenericData.Record(schema)
// code to populate record
// check output below to see the data
logger.info(s"${avroRecord.toString}\n")
producer.send(new ProducerRecord[String, GenericData.Record](topic, avroRecord), new ProducerCallback)
producer.flush()
producer.close()
Schema and data, as per the output:
{"name": "person","type": "record","fields": [{"name": "address","type": {"type" : "record","name" : "AddressUSRecord","fields" : [{"name": "streetaddress", "type": "string"},{"name": "city", "type":"string"}]}}]}
I am getting the following error while publishing to Kafka.
Error registering Avro schema:
org.apache.kafka.common.errors.SerializationException:
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:170)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:187)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:238)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:230)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:225)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:59)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:91)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:72)
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:54)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:877)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:839)
Based on the schema and data, is anything missing? Is my record correct?
Also, I want to know how I should populate an Avro null from Scala; None doesn't work.
Any help will be appreciated. I am really stuck here.
UPDATE:
Thanks #cricket_007 for pointing out the issue. I do get the following error:
2019-03-20 13:26:09.660 [application-akka.actor.default-dispatcher-5] INFO i.c.k.s.KafkaAvroSerializerConfig.logAll(169) - KafkaAvroSerializerConfig values:
schema.registry.url = [http://myhost:8081]
max.schemas.per.subject = 1000
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
However, when I use the same URL (http://myhost:8081) in my browser, it works fine. I can see the subjects and other information.
But as soon as I use the client (the Scala program above), it fails with the above error.
I just checked with sample code like the below, and it gives the same issue.
val client = new OkHttpClient
val request = new Request.Builder().url("http://myhost:8081/subjects").build()
val output = client.newCall(request).execute().body().string()
logger.info(s"Subjects: ${output}\n")
I am getting connection refused for the schema registry URL.
Subjects: <HEAD><TITLE>Connection refused</TITLE></HEAD>
<BODY BGCOLOR="white" FGCOLOR="black"><H1>Connection refused</H1><HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Connection refused</B></FONT>
<HR>
<!-- default "Connection refused" response (502) -->
</BODY>
So I wanted to check whether I am missing anything. The same thing works when I run it in a browser, but simple code like the above fails.
That's an HTTP response parsing error. It seems your Schema Registry is not returning a JSON response but rather some HTML starting with a < open tag.
You should check if the registry is really running at http://myschemahost:8081, and you can manually post your schema to it using the REST API to do the same actions as the serializer would.
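For example, here is a minimal sketch of registering the schema by hand with the schema-registry client that is already among your dependencies (the subject name assumes the default "<topic>-value" naming; topic and schemaFileName are the values from your snippet):

import java.io.File
import org.apache.avro.Schema
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient

// The same registration call the KafkaAvroSerializer makes internally.
val registryClient = new CachedSchemaRegistryClient("http://myschemahost:8081", 1000)
val schemaToRegister = new Schema.Parser().parse(new File(schemaFileName))
val schemaId = registryClient.register(s"$topic-value", schemaToRegister)
println(s"Schema registered with id $schemaId")

If this fails the same way, the problem is between your client and the registry (for example a proxy returning that HTML error page, as in your OkHttp test) rather than in your record or schema.

On the separate question about populating an Avro null from Scala: the GenericData.Record API expects a plain Java null rather than Scala's None, and the field must be declared as a union that includes "null". A small sketch (the field name optionalCity and its union schema are assumptions, not part of your posted schema):

// Assumed field declaration: {"name": "optionalCity", "type": ["null", "string"], "default": null}
avroRecord.put("optionalCity", null)        // plain null, not None
// or, starting from an Option:
val maybeCity: Option[String] = None
avroRecord.put("optionalCity", maybeCity.orNull)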
Related
I am streaming data from MySQL using Slick 3 and Akka Streams.
This is how I build my source:
import slick.jdbc.MySQLProfile.api._
val enableJdbcStreaming: (java.sql.Statement) => Unit = {statement =>
if (statement.isWrapperFor(classOf[com.mysql.cj.jdbc.StatementImpl])) {
statement.unwrap(classOf[com.mysql.cj.jdbc.StatementImpl]).enableStreamingResults()
}
}
val query = Tables.Foo.filter(r => r.isActive === true)
.map(r => r.id).result.withStatementParameters(statementInit = enableJdbcStreaming)
Source.fromPublisher(db.stream(query))
My application runs for about 20 minutes and then shuts down with the following error:
[error] Exception in thread "abhipool network timeout executor" java.lang.NullPointerException
[info] 15:31:46 INFO [HikariPool] - abhipool - Close initiated...
[error] at com.mysql.cj.mysqla.io.MysqlaProtocol.setSocketTimeout(MysqlaProtocol.java:1397)
[error] at com.mysql.cj.mysqla.MysqlaSession$1.run(MysqlaSession.java:401)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[error] at java.lang.Thread.run(Thread.java:745)
I have a feeling that, because my query runs for a very long time, some kind of timeout occurs that initiates this shutdown.
My connection
mysql {
profile = "slick.jdbc.MySQLProfile$"
dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
properties {
driver = "com.mysql.cj.jdbc.Driver"
url = "jdbc:mysql://foo:3306/bar?useLegacyDatetimeCode=false&serverTimezone=America/Chicago"
user = "foo"
password = "bar"
}
connectionTimeout = 0
idleTimeout = 0
maxLifetime = 0
maxConnections = 40
minConnections = 10
poolName = "abhipool"
numThreads = 10
}
Dependencies
"com.typesafe.slick" %% "slick" % "3.2.1",
"com.typesafe.slick" %% "slick-hikaricp" % "3.2.1",
"mysql" % "mysql-connector-java" % "6.0.6",
How can I configure my application's database connections so that the application keeps running even if it streams data for several days?
There is an extremely lengthy conversation about this same issue here, but it doesn't explain how to actually fix it. This issue makes it practically impossible to write long-running streaming tasks that use MySQL as a source.
You can configure the MySQL driver by adding parameters to the URL:
url = "jdbc:mysql://foo:3306/bar?useLegacyDatetimeCode=false&serverTimezone=America/Chicago&socketTimeout=30000"
I used 30000 for the sake of the example; use whatever value fits your needs.
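In the HOCON configuration from the question, the parameter goes on the url line under properties; a sketch with only that line changed:

mysql {
  properties {
    url = "jdbc:mysql://foo:3306/bar?useLegacyDatetimeCode=false&serverTimezone=America/Chicago&socketTimeout=30000"
  }
  # everything else unchanged
}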
I'm trying to get around a CORS error for a simple "hello world" style REST API in Scala/Play 2.6.x, and I have tried everything I can think of at this point. As far as I can tell, there is no good solution or example to be found on the internet, so even if this turns out to be an easy fix, anyone who has a good solution would really help me out by posting it in full. I am simply trying to send a POST request from localhost:3000 (a React application using axios) to localhost:9000, where my Scala/Play Framework application lives.
THE ERRORS
The error that I am getting on the client-side is the following:
XMLHttpRequest cannot load http://localhost:9000/saveTest.
Response to preflight request doesn't pass access control check:
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3000' is therefore not allowed
access. The response had HTTP status code 403.
The error that I am getting on the server-side is:
[success] Compiled in 1s
--- (RELOAD) ---
[info] p.a.h.EnabledFilters - Enabled Filters
(see <https://www.playframework.com/documentation/latest/Filters>):
play.filters.csrf.CSRFFilter
play.filters.headers.SecurityHeadersFilter
play.filters.hosts.AllowedHostsFilter
play.filters.cors.CORSFilter
[info] play.api.Play - Application started (Dev)
[warn] p.f.c.CORSFilter - Invalid CORS
request;Origin=Some(http://localhost:3000);
Method=OPTIONS;Access-Control-Request-Headers=Some(content-type)
MY CODE
I have the following in my application.conf file
# https://www.playframework.com/documentation/latest/Configuration
play.filters.enabled += "play.filters.cors.CORSFilter"
play.filters.cors {
pathPrefixes = ["/"]
allowedOrigins = ["http://localhost:3000", ...]
allowedHttpMethods = ["GET", "POST", "PUT", "DELETE"]
allowedHttpHeaders = ["Accept"]
preflightMaxAge = 3 days
}
I've tried changing pathPrefixes to /saveTest (my endpoint), and tried changing allowedOrigins to simply 'https://localhost'. I've tried changing allowedHttpHeaders = "Allow-access-control-allow-origin". I've tried setting allowedOrigins, allowedHttpMethods, and allowedHttpHeaders all to null, which, according to the documentation (https://www.playframework.com/documentation/2.6.x/resources/confs/filters-helpers/reference.conf), should allow everything (as should pathPrefixes = ["/"]).
My build.sbt is the following, so it should be adding the filter to the libraryDependencies:
name := """scalaREST"""
organization := "com.example"
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.12.2"
libraryDependencies += guice
libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "3.1.0" % Test
libraryDependencies += filters
According to the documentation available here: https://www.playframework.com/documentation/2.6.x/Filters#default-filters, you can set the default filters like this:
import javax.inject.Inject
import play.filters.cors.CORSFilter
import play.api.http.{ DefaultHttpFilters, EnabledFilters }
class Filters #Inject()(enabledFilters: EnabledFilters, corsFilter: CORSFilter)
extends DefaultHttpFilters(enabledFilters.filters :+ corsFilter: _*)
I'm not sure exactly where that class should go in my project; the documentation doesn't say, but from other Stack Overflow answers I assume it should go in the root of my app directory (that is, /app), so that's where I put it.
Finally, there was one exotic Stack Overflow response that said to put this class in my controllers and call it as a function on my Ok responses:
implicit class RichResult(result: Result) {
  def enableCors = result.withHeaders(
    "Access-Control-Allow-Origin" -> "*",
    "Access-Control-Allow-Methods" -> "OPTIONS, GET, POST, PUT, DELETE, HEAD", // OPTIONS for pre-flight
    "Access-Control-Allow-Headers" -> "Accept, Content-Type, Origin, X-Json, X-Prototype-Version, X-Requested-With", //, "X-My-NonStd-Option"
    "Access-Control-Allow-Credentials" -> "true"
  )
}
Needless to say, this did not work.
WRAP UP
Here is the backend for my current scala project.
https://github.com/patientplatypus/scalaproject1/tree/master/scalarest
Please, if you can, show a full working example of a CORS implementation - I cannot get anything I can find online to work. I will probably be submitting this as a documentation request to the Play Framework organization - this should not be nearly this difficult. Thank you.
Your preflight request fails because your request sets a Content-Type header, which is not in your allowedHttpHeaders list (the server log shows Access-Control-Request-Headers=Some(content-type)).
Add Content-Type to allowedHttpHeaders in your application.conf like so:
#application.conf
play.filters.cors {
#other cors configuration
allowedHttpHeaders = ["Accept", "Content-Type"]
}
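To double-check the preflight from the JVM side, here is a small sketch using java.net.HttpURLConnection; the endpoint and origin are the ones from the question, and the expected results are assumptions about a correctly configured filter:

import java.net.{HttpURLConnection, URL}

// Simulate the browser's preflight: an OPTIONS request carrying the CORS request headers.
val conn = new URL("http://localhost:9000/saveTest").openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestMethod("OPTIONS")
conn.setRequestProperty("Origin", "http://localhost:3000")
conn.setRequestProperty("Access-Control-Request-Method", "POST")
conn.setRequestProperty("Access-Control-Request-Headers", "content-type")

println(conn.getResponseCode)                               // 403 before the fix, 200 once CORS is configured
println(conn.getHeaderField("Access-Control-Allow-Origin")) // should echo http://localhost:3000 after the fix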
I had this problem too, and I added this configuration in application.conf:
play.filters.enabled += "play.filters.cors.CORSFilter"
play.filters.cors {
allowedHttpMethods = ["GET", "HEAD", "POST"]
allowedHttpHeaders = ["Accept", "Content-Type"]
}
and now everything is OK!
For more info, see the Play documentation.
For Play Framework 2.8.x, we can wrap the Result in a function as below:
def addCorsHeader (response : Result) : Result = {
response.withHeaders(
("Access-Control-Allow-Origin", "*"),
("Access-Control-Allow-Methods" , "GET,POST,OPTIONS,DELETE,PUT")
)
}
Now, in the controller, wrap the Results using the above function (this excerpt is from inside an action):
val result = myService.swipeOut(inputParsed)
addCorsHeader(Ok(s"$result row successfully updated. Trip complete"))
}
else {
addCorsHeader(InternalServerError("POST body is mandatory"))
}
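For context, here is one way the excerpt above might sit inside a complete action. This is only a sketch; the action name, the JSON body parsing, and the signature of myService.swipeOut are assumptions that are not part of the original snippet:

// Inside a controller with play.api.mvc._ in scope.
def swipeOut: Action[AnyContent] = Action { request =>
  request.body.asJson match {
    case Some(inputParsed) =>
      val result = myService.swipeOut(inputParsed)
      addCorsHeader(Ok(s"$result row successfully updated. Trip complete"))
    case None =>
      addCorsHeader(InternalServerError("POST body is mandatory"))
  }
}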
I am using "com.hierynomus" % "sshj" % "0.13.0" to transfer a file from local server to sftp server
File is getting tranferred without any problem but I am getting below error on console :
[error] n.s.s.t.TransportImpl - Dying because - {}
net.schmizz.sshj.transport.TransportException: Broken transport; encountered EOF
at net.schmizz.sshj.transport.Reader.run(Reader.java:58) ~[sshj-0.13.0.jar:na]
Code :
import net.schmizz.sshj.SSHClient
import net.schmizz.sshj.transport.verification.PromiscuousVerifier

val ssh = new SSHClient()
ssh.addHostKeyVerifier(new PromiscuousVerifier())
ssh.connect(props.host)
ssh.authPassword(props.username, props.password)
val client = ssh.newSFTPClient()
client.put(localDirectory + fileName, remoteDirectory)
Please suggest the cause of this error.
I have a publicly available Amazon S3 resource (a text file) and want to access it from Spark. That means I don't have any Amazon credentials. It works fine if I just want to download it:
val bucket = "<my-bucket>"
val key = "<my-key>"
val client = new AmazonS3Client
val o = client.getObject(bucket, key)
val content = o.getObjectContent // <= can be read and used as input stream
However, when I try to access the same resource from the Spark context:
val conf = new SparkConf().setAppName("app").setMaster("local")
val sc = new SparkContext(conf)
val f = sc.textFile(s"s3a://$bucket/$key")
println(f.count())
I receive the following error with stacktrace:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.rdd.RDD.count(RDD.scala:1099)
at com.example.Main$.main(Main.scala:14)
at com.example.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
I don't want to provide any AWS credentials; I just want to access the resource anonymously (for now). How can I achieve this? I probably need to make it use something like AnonymousAWSCredentialsProvider, but how do I put it inside Spark or Hadoop?
P.S. My build.sbt, just in case:
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.4.1",
"org.apache.hadoop" % "hadoop-aws" % "2.7.1"
)
UPDATE: After doing some investigation, I see the reason why it isn't working.
First of all, S3AFileSystem creates the AWS client with the following order of credential providers:
AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
new BasicAWSCredentialsProvider(accessKey, secretKey),
new InstanceProfileCredentialsProvider(),
new AnonymousAWSCredentialsProvider()
);
"accessKey" and "secretKey" values are taken from the spark conf instance (keys must be "fs.s3a.access.key" and "fs.s3a.secret.key" or org.apache.hadoop.fs.s3a.Constants.ACCESS_KEY and org.apache.hadoop.fs.s3a.Constants.SECRET_KEY constants, which is more convenient).
Second - you probably see that AnonymousAWSCredentialsProvider is the third option (last priority) - what could possible be wrong with that? See the implementation of AnonymousAWSCredentials:
public class AnonymousAWSCredentials implements AWSCredentials {
public String getAWSAccessKeyId() {
return null;
}
public String getAWSSecretKey() {
return null;
}
}
It simply returns null for both access key and secret key. Sounds reasonable. But look inside AWSCredentialsProviderChain:
AWSCredentials credentials = provider.getCredentials();
if (credentials.getAWSAccessKeyId() != null &&
credentials.getAWSSecretKey() != null) {
log.debug("Loading credentials from " + provider.toString());
lastUsedProvider = provider;
return credentials;
}
It doesn't choose the provider if both keys are null, which means anonymous credentials can't work. This looks like a bug inside aws-java-sdk 1.7.4. I tried to use the latest version, but it's incompatible with hadoop-aws 2.7.1.
Any other ideas?
I personally have never accessed public data from Spark. You can try to use dummy credentials, or create some just for this purpose, and set them directly on the SparkConf object:
val sparkConf: SparkConf = ???
val accessKeyId: String = ???
val secretAccessKey: String = ???
sparkConf.set("spark.hadoop.fs.s3.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3n.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3.awsSecretAccessKey", secretAccessKey)
sparkConf.set("spark.hadoop.fs.s3n.awsSecretAccessKey", secretAccessKey)
As an alternative, read the documentation of DefaultAWSCredentialsProviderChain to see where the credentials are looked for; a sketch of the system-properties option follows the list. The list (order is important) is:
Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
Java System Properties - aws.accessKeyId and aws.secretKey
Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
Instance profile credentials delivered through the Amazon EC2 metadata service
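For instance, a minimal sketch of the Java-system-properties route from Scala, set before the SparkContext is created (the values here are dummies; the property names are the standard AWS SDK ones):

import org.apache.spark.{SparkConf, SparkContext}

// Must be set before the S3A filesystem first builds its credentials chain.
System.setProperty("aws.accessKeyId", "DUMMY_ACCESS_KEY")
System.setProperty("aws.secretKey", "DUMMY_SECRET_KEY")

val conf = new SparkConf().setAppName("app").setMaster("local")
val sc = new SparkContext(conf)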
This is what helped me:
val session = SparkSession.builder()
.appName("App")
.master("local[*]")
.config("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
.getOrCreate()
val df = session.read.csv(filesFromS3:_*)
Versions:
"org.apache.spark" %% "spark-sql" % "2.4.0",
"org.apache.hadoop" % "hadoop-aws" % "2.8.5",
Documentation:
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authentication_properties
It seems you can now use the fs.s3a.aws.credentials.provider config key to get anonymous access via org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider, which correctly special-cases the anonymous provider. However, you need a newer hadoop-aws than 2.7, which means you also need a Spark installation without bundled Hadoop.
Here is how I did it in Colab:
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.3.1/spark-2.3.1-bin-without-hadoop.tgz
!tar xf spark-2.3.1-bin-without-hadoop.tgz
!pip install -q findspark
!pip install -q pyarrow
Now we install Hadoop on the side and set the output of hadoop classpath as SPARK_DIST_CLASSPATH, so Spark can see it.
import os
!wget -q http://mirror.nbtelecom.com.br/apache/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz
!tar xf hadoop-2.8.4.tar.gz
os.environ['HADOOP_HOME']= '/content/hadoop-2.8.4'
os.environ["SPARK_DIST_CLASSPATH"] = "/content/hadoop-2.8.4/etc/hadoop:/content/hadoop-2.8.4/share/hadoop/common/lib/*:/content/hadoop-2.8.4/share/hadoop/common/*:/content/hadoop-2.8.4/share/hadoop/hdfs:/content/hadoop-2.8.4/share/hadoop/hdfs/lib/*:/content/hadoop-2.8.4/share/hadoop/hdfs/*:/content/hadoop-2.8.4/share/hadoop/yarn/lib/*:/content/hadoop-2.8.4/share/hadoop/yarn/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/lib/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/*:/content/hadoop-2.8.4/contrib/capacity-scheduler/*.jar"
Then we proceed as in https://mikestaszel.com/2018/03/07/apache-spark-on-google-colaboratory/, but add s3a and anonymous-read support, which is what this question is about.
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-without-hadoop"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.6,org.apache.hadoop:hadoop-aws:2.8.4 --conf spark.sql.execution.arrow.enabled=true --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider pyspark-shell'
And finally we can create the session.
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
The following is a valid query in a browser (e.g. Firefox):
http://www.freesound.org/api/sounds/search/?q=barking&api_key=074c0b328aea46adb3ee76f6918f8fae
yielding a JSON document:
{
"num_results": 610,
"sounds": [
{
"analysis_stats": "http://www.freesound.org/api/sounds/115536/analysis/",
"analysis_frames": "http://www.freesound.org/data/analysis/115/115536_1956076_frames.json",
"preview-hq-mp3": "http://www.freesound.org/data/previews/115/115536_1956076-hq.mp3",
"original_filename": "Two Barks.wav",
"tags": [
"animal",
"bark",
"barking",
"dog",
"effects",
...
I am trying to perform this query with Dispatch 0.9.4. Here's a build.sbt:
scalaVersion := "2.10.0"
libraryDependencies += "net.databinder.dispatch" %% "dispatch-core" % "0.9.4"
From sbt console, I do the following:
import dispatch._
val q = url("http://www.freesound.org/api/sounds/search")
.addQueryParameter("q", "barking")
.addQueryParameter("api_key", "074c0b328aea46adb3ee76f6918f8fae")
val res = Http(q OK as.String)
But the promise always completes with the following error:
res0: dispatch.Promise[String] = Promise(!Unexpected response status: 301!)
So what am I doing wrong? Here is the API documentation in case it helps.
You can enable redirect following with the configure method on the Http executor:
Http.configure(_ setFollowRedirects true)(q OK as.String)
You could also pull the Location out of the 301 response manually, but that's going to be a lot less convenient.
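For completeness, a minimal sketch of the full call with redirect-following enabled, using the same URL and parameters as in the question (printing a prefix of the body is only for illustration):

import dispatch._

val q = url("http://www.freesound.org/api/sounds/search")
  .addQueryParameter("q", "barking")
  .addQueryParameter("api_key", "074c0b328aea46adb3ee76f6918f8fae")

// A one-off executor that follows the 301 instead of failing on it.
val res = Http.configure(_ setFollowRedirects true)(q OK as.String)

for (body <- res) println(body.take(200))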