Vert.x not able to resolve host and not able to download a large file using WebClient

I am trying to download a file from a server but I am getting an error.
I checked some solutions and found that I need to disable the DNS resolver, but that doesn't work for me either.
Here is the code to disable the DNS resolver:
System.setProperty("vertx.disableDnsResolver", "true");
Source code:
WebClientOptions webClientOptions = new WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true);
WebClient webClient = WebClient.create(vertx, webClientOptions);
final HttpRequest<Buffer> abs = webClient.get("https://downloads.dell.com", "/xyz.exe");
abs.send(responseHandler -> {
    if (responseHandler.succeeded()) {
        String temp = vertx.fileSystem().createTempFileBlocking("", "");
        vertx.fileSystem().writeFileBlocking(temp, responseHandler.result().body());
        routingContext.response().end("Success");
    } else {
        log.error("Error: {}", responseHandler.cause());
        routingContext.response().end("failure");
    }
});
Here is the exception:
java.net.UnknownHostException: failed to resolve 'https://downloads.dell.com'. Exceeded max queries per resolve 4
at io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:927) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:886) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:358) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:858) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:358) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:858) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:358) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:858) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:358) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:1001) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:878) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:358) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
at io.netty.resolver.dns.DnsResolveContext.onResponse(DnsResolveContext.java:545) [netty-resolver-dns-4.1.49.Final.jar:4.1.49.Final]
Update:
Using getAbs() I am able to download files of around 15 to 20 MB, but files larger than 500 MB still fail to download.

The WebClient.get(String, String) method expects a host name and a path, but you provide a URL as the first parameter.
If you want to use a URL, use the WebClient.getAbs(String) method:
final HttpRequest<Buffer> abs = webClient.getAbs("https://downloads.dell.com/xyz.exe");
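Regarding the update about files over 500 MB: the default Buffer body accumulates the entire response in memory before you write it to disk, so very large downloads can exhaust the heap. Below is a minimal sketch that streams the body straight to a file with BodyCodec.pipe instead, reusing the question's vertx, webClient, routingContext, and log; the target path /tmp/xyz.exe is a placeholder:

import io.vertx.core.file.AsyncFile;
import io.vertx.core.file.OpenOptions;
import io.vertx.ext.web.codec.BodyCodec;

// Open the target file first, then pipe the HTTP body into it chunk by chunk
// instead of accumulating the whole response in a Buffer.
vertx.fileSystem().open("/tmp/xyz.exe", new OpenOptions(), fileResult -> {
    if (fileResult.succeeded()) {
        AsyncFile file = fileResult.result();
        webClient.getAbs("https://downloads.dell.com/xyz.exe")
                .as(BodyCodec.pipe(file)) // stream to disk; no 500 MB Buffer in memory
                .send(ar -> {
                    if (ar.succeeded()) {
                        routingContext.response().end("Success");
                    } else {
                        log.error("Error: {}", ar.cause());
                        routingContext.response().end("failure");
                    }
                });
    } else {
        log.error("Could not open target file: {}", fileResult.cause());
    }
});

BodyCodec.pipe hands each chunk to the AsyncFile as it arrives, so memory use stays flat regardless of the file size.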

It's resolving just fine for me:
fun main() {
    val vertx = Vertx.vertx()
    val webClientOptions = WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true)
    val webClient = WebClient.create(vertx, webClientOptions)
    val abs = webClient.getAbs("https://downloads.dell.com/xyz.exe")
    abs.send().onComplete { println(it.failed()) }.onSuccess { println(it.statusMessage()) }
}
// prints
// false
// Not Found
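(false means the send did not fail, and Not Found is just the 404 for the placeholder path /xyz.exe, i.e. the host name resolves fine.)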

Related

akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity while uploading a large file to S3 using akka http

I am trying to upload a large file (90 MB for now) to S3 using Akka HTTP with the Alpakka S3 connector. It works fine for small files (25 MB), but when I try to upload a large file (90 MB), I get the following error:
akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:108)
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:103)
at akka.stream.impl.fusing.Collect$$anon$6.$anonfun$wrappedPf$1(Ops.scala:227)
at akka.stream.impl.fusing.SupervisedGraphStageLogic.withSupervision(Ops.scala:186)
at akka.stream.impl.fusing.Collect$$anon$6.onPush(Ops.scala:229)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:510)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:739)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:765)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Although I get the success message at the end, the file is not uploaded completely; only 45-50 MB of it are uploaded.
I am using the below code:
S3Utility.scala
class S3Utility(implicit as: ActorSystem, m: Materializer) {
  private val bucketName = "test"

  def sink(fileInfo: FileInfo): Sink[ByteString, Future[MultipartUploadResult]] = {
    val fileName = fileInfo.fileName
    S3.multipartUpload(bucketName, fileName)
  }
}
Routes:
def uploadLargeFile: Route =
  post {
    path("import" / "file") {
      extractMaterializer { implicit materializer =>
        withoutSizeLimit {
          fileUpload("file") {
            case (metadata, byteSource) =>
              logger.info(s"Request received to import large file: ${metadata.fileName}")
              val uploadFuture = byteSource.runWith(s3Utility.sink(metadata))
              onComplete(uploadFuture) {
                case Success(result) =>
                  logger.info(s"Successfully uploaded file")
                  complete(StatusCodes.OK)
                case Failure(ex) =>
                  println(ex, "Error in uploading file")
                  complete(StatusCodes.FailedDependency, ex.getMessage)
              }
          }
        }
      }
    }
  }
Any help would be appreciated. Thanks.
Strategy 1
You can break the file into smaller chunks and retry; here is sample code using the AWS SDK's multipart upload:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("some-kind-of-endpoint"))
    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("user", "pass")))
    .disableChunkedEncoding()
    .withPathStyleAccessEnabled(true)
    .build();

// Create a list of PartETag objects. You get one of these for each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest =
    new InitiateMultipartUploadRequest("bucket", "key");
InitiateMultipartUploadResult initResponse =
    s3Client.initiateMultipartUpload(initRequest);

File file = new File("filepath");
long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.

try {
    // Step 2: Upload parts.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        // The last part can be less than 5 MB. Adjust the part size.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Create a request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName("bucket").withKey("key")
            .withUploadId(initResponse.getUploadId()).withPartNumber(i)
            .withFileOffset(filePosition)
            .withFile(file)
            .withPartSize(partSize);

        // Upload the part and add the response's ETag to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
        filePosition += partSize;
    }

    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest =
        new CompleteMultipartUploadRequest("bucket", "key", initResponse.getUploadId(), partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    s3Client.abortMultipartUpload(
        new AbortMultipartUploadRequest("bucket", "key", initResponse.getUploadId()));
}
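As a side note, S3 requires every part except the last to be at least 5 MB, which is why partSize above starts at 5,242,880 bytes.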
Strategy 2
Increase the idle-timeout of the Akka HTTP server (just set it to infinite), like the following:
akka.http.server.idle-timeout=infinite
This raises the maximum time a connection may sit idle (no data flowing) before the server closes it. By default it is 60 seconds; if the upload stalls for longer than that, the server closes the connection, which surfaces as the "Unexpected end of multipart entity" error.
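Beyond what this strategy mentions (so treat this as an assumption to verify), akka.http.server.request-timeout can also abort long-running uploads; both settings can be raised together in application.conf:

# application.conf - a sketch; the request-timeout line is an extra knob to check
akka.http.server {
  idle-timeout = infinite
  request-timeout = infinite
}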

ejabberd - custom iq handlers

I have created a simple module. Here is the code:
-module(mod_conversations).
-behaviour(gen_mod).
-export([start/2, stop/1, process_local_iq/3]).
-include("ejabberd.hrl").
-include("logger.hrl").
-include("jlib.hrl").

-define(IQ_CUSTOM, <<"jabber:iq:conversations">>).

start(Host, _) ->
    gen_iq_handler:add_iq_handler(ejabberd_local, Host, ?IQ_CUSTOM, ?MODULE, process_local_iq, one_queue),
    ok.

stop(Host) ->
    gen_iq_handler:remove_iq_handler(ejabberd_local, Host, ?IQ_CUSTOM),
    ok.

process_local_iq(From, _, IQ) ->
    IQ#iq{type = result, sub_el = [{xmlelement, <<"value">>, [], [{xmlcdata, <<"Testing...">>}]}]}.
When I make a request with strophe.js:
var iq = $iq({type: 'get', to: 'some host'}).c('query', {xmlns: 'jabber:iq:conversations'});
connection.sendIQ(iq);
the server disconnects me. I have tried to find similar issues, but the ones I found were more involved. Please help me solve this issue.
You must not use xmlelement tuples in recent ejabberd versions; the correct code should look like this:
IQ#iq{type = result,
      sub_el = [#xmlel{name = <<"value">>,
                       children = [{xmlcdata, <<"Testing...">>}]}]}.
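For illustration, the handler above should produce a result stanza roughly like the following on the wire (the id and addresses are placeholders):

<iq type='result' id='...' from='somehost' to='user@somehost/resource'>
  <value>Testing...</value>
</iq>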

creating custom components and running with NoFlo

I am creating a custom component which interfaces with MongoDB. I wrote a CoffeeScript file which just connects to MongoDB, and stored it in the noflo/components folder.
MongoBase.coffee
noflo = require "noflo"
mongodb = require "mongodb"
url = require "url"

class exports.MongoBase extends noflo.Component
  constructor: ->
    super
    @inPorts =
      url: new noflo.Port()
    @inPorts.url.on "data", (data) =>
      try
        @parseConnectionString(data)
        @MongoClient = mongodb.MongoClient
        @MongoClient.connect @serverUrl, (err, db) ->
          if err
            console.log("Error in connecting to MongoDB")
          else
            console.log("Connected to MongoDB")
      catch error
        console.log(error)

  parseConnectionString: (connectionString) =>
    databaseUrl = try
      url.parse(connectionString)
    catch error
      console.log(error)
    [..., @serverUrl, @databaseName] = databaseUrl.split('/')
    @serverUrl = "mongo://" + @serverUrl
I added the following entry to component.json
"MongoBase": "components/MongoBase.coffee"
In addition to this, I created a mongo.fbp file to check the flow of the component. The FBP file has the following code:
'mongodb://localhost:27017/test' -> url DocReader(MongoBase)
On running noflo mongo.fbp, I get the following error:
/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:1628
edges.forEach(function (o, i) {
^
TypeError: Object #<Object> has no method 'forEach'
at Object.parser.registerEdges (/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:1628:15)
at peg$c25 (/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:60:50)
at peg$parseline (/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:749:30)
at peg$parsestart (/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:282:12)
at Object.parse (/home/saurabh/workspace/noflo/node_modules/fbp/lib/fbp.js:1650:18)
at Object.exports.loadFBP (/home/saurabh/workspace/noflo/lib/Graph.js:1065:33)
at /home/saurabh/workspace/noflo/lib/Graph.js:1116:24
at fs.js:268:14
at Object.oncomplete (fs.js:107:15)
Is there something wrong with my code, or the steps I am using to run the code?
You may have figured this out already, as it's been several months since you asked, but I believe you need to add the getComponent() method to your class before you export it.
noflo = require "noflo"
mongodb = require "mongodb"
url = require "url"

class MongoBase extends noflo.Component
  constructor: ->
    super
    @inPorts =
      url: new noflo.Port()
    @inPorts.url.on "data", (data) =>
      try
        @parseConnectionString(data)
        @MongoClient = mongodb.MongoClient
        @MongoClient.connect @serverUrl, (err, db) ->
          if err
            console.log("Error in connecting to MongoDB")
          else
            console.log("Connected to MongoDB")
      catch error
        console.log(error)

  parseConnectionString: (connectionString) =>
    databaseUrl = try
      url.parse(connectionString)
    catch error
      console.log(error)
    [..., @serverUrl, @databaseName] = databaseUrl.split('/')
    @serverUrl = "mongo://" + @serverUrl

MongoBase.getComponent = -> new MongoBase
exports.MongoBase = MongoBase
Additionally, for the component loader to work, your graph needs to specify the package your component lives in. If your package.json/component.json has a name entry like "name": "mongo-base", then you have to specify this in the FBP graph, like so:
'mongodb://localhost:27017/test' -> url DocReader(mongo-base/MongoBase)
N.B.: The loader clobbers any instances of 'noflo-' in the package name, so this needs to be taken into account. E.g. the name 'noflo-mongo' would get turned into just 'mongo', so when invoking the package's components you'd write in the fbp DocReader(mongo/MongoBase) and not DocReader(noflo-mongo/MongoBase).

Matlab urlread2 - HTTP response code: 415 for URL

I am attempting to access the betfair API using Matlab and the urlread2 function available here.
EDIT: I have posted this problem on Freelancer if anyone can help with it: tinyurl.../pa7sblb
The documentation I am following is the betfair API getting started guide. I have successfully logged in and kept the session open using the code below (I am getting a success response):
%% Login and get Token
url = 'https://identitysso.betfair.com/api/login';
params = {'username' '******' 'password' '******'};
header1 = http_createHeader('X-Application','*****');
header2 = http_createHeader('Accept','application/json');
header = [header1, header2];
[paramString] = http_paramsToString(params)
[login,extras] = urlread2(url,'POST',paramString,header)
login = loadjson(login)
token = login.token
%% Keep Alive
disp('Keep Session Alive')
url_alive = 'https://identitysso.betfair.com/api/keepAlive';
header1 = http_createHeader('X-Application','******');
header2 = http_createHeader('Accept','application/json');
header3 = http_createHeader('X-Authentication',token');
header_alive = [header1, header2, header3];
[keep_alive,extras] = urlread2(url_alive,'POST',[],header_alive);
keep_alive = loadjson(keep_alive);
keep_alive_status = keep_alive.status
My trouble starts when I attempt the next step and load all available markets. I am trying to replicate this example code, which is written in Python:
import requests
import json
endpoint = "https://api.betfair.com/exchange/betting/rest/v1.0/"
header = { 'X-Application' : 'APP_KEY_HERE', 'X-Authentication' : 'SESSION_TOKEN_HERE' ,'content-type' : 'application/json' }
json_req='{"filter":{ }}'
url = endpoint + "listEventTypes/"
response = requests.post(url, data=json_req, headers=header)
The code I am using for Matlab is below.
%% Get Markets
url = 'https://api.betfair.com/exchange/betting/rest/v1.0/listEventTypes/';
header_application = http_createHeader('X-Application','******');
header_authentication = http_createHeader('X-Authentication',token');
header_content = http_createHeader('content_type','application/json');
header_list = [header_application, header_authentication, header_content];
json_body = savejson('','filter: {}');
[list,extras] = urlread2(url_list,'POST',json_body,header_list)
I am getting an HTTP 415 response code. I believe the server cannot understand my request body, since the headers are the same ones I have used successfully before.
Any help or advice would be greatly appreciated!
This is the error:
Response stream is undefined
Below is a Java error dump (truncated):
Error using urlread2 (line 217)
Java exception occurred:
java.io.IOException: Server returned HTTP response code: 415 for URL....
I looked at your problem and it seems to be caused by two things:
1) The content type should be expressed as 'content-type', not 'content_type'.
2) The savejson function doesn't create an adequate JSON string. If you use the JSON request from the Python script, it works.
This code works for me:
%% Get Markets
url = 'https://api.betfair.com/exchange/betting/rest/v1.0/listEventTypes/';
header_application = http_createHeader('X-Application','*********');
header_authentication = http_createHeader('X-Authentication',token');
header_content = http_createHeader('content-type','application/json');
header_list = [header_application, header_authentication, header_content];
json_body = '{"filter":{ }}';
[list,extras] = urlread2(url,'POST',json_body,header_list)
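For context, HTTP 415 means "Unsupported Media Type": the server refused the request because it could not recognize the body's content type, which is exactly what the misspelled content-type header caused.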

Can't bind MongoDB service on AppFog

Hi, I'm trying to bind the MongoDB service to my Express.js app on AppFog.
I have a config file like this:
config.js
var config = {}
config.dev = {};
config.prod = {};
//DEV
config.dev.host = "localhost";
config.dev.port = 3000;
config.dev.mdbhost = "localhost";
config.dev.mdbport = 27017;
config.dev.db = "detysi";
//PROD
config.prod.service_type = "mongo-1.8";
config.prod.json = process.env.VCAP_SERVICES ? JSON.parse(process.env.VCAP_SERVICES) : '';
config.prod.credentials = process.env.VCAP_SERVICES ? config.prod.json[config.prod.service_type][0]["credentials"] : null;
config.prod.mdbhost = config.prod.credentials["host"];
config.prod.mdbport = config.prod.credentials["port"];
config.prod.db = config.prod.credentials["db"];
config.prod.port = process.env.VCAP_APP_PORT || process.env.PORT;
module.exports = config;
And this is my MongoDB configuration depending on the environment:
app.js
if (process.env.VCAP_SERVICES) {
    server = new Server(config.prod.mdbhost, config.prod.mdbport, { auto_reconnect: true });
    db = new Db(config.prod.db, server);
} else {
    server = new Server(config.dev.mdbhost, config.dev.mdbport, { auto_reconnect: true });
    db = new Db(config.dev.db, server);
}
I bound the service manually from https://console.appfog.com; my app is using the AWS Virginia infra. I also used the MongoHQ addon to create one collection with two documents.
When I go to the Windows console and run af update myapp, it throws the following error:
Cannot read property '0' of undefined
This is because process.env.VCAP_SERVICES is undefined.
I investigated this, and it may be that my MongoDB service is incorrectly bound.
After that I tried to bind the MongoDB service from the Windows console, like below:
af bind-service mongodb myapp
But it throws the following error:
Service mongodb and App myapp are not on the same infra
At this point I don't know what else I can do.
I had the same problem. What fixed it for me:
Go to console > services > (re)start the MongoDB service.
Run the same command again.
Push again.
It should work.
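If it still fails, the "not on the same infra" error above suggests the service was created on a different infra than the app; listing your services with af services and recreating the MongoDB service on the app's infra (AWS Virginia here) should let the bind succeed. (This is an inference from the error message, not something I could verify.)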