Where to get the ejabberd API URL and host name - xmpp

I am going to develop a chat application with ejabberd and ReactJS. I installed ejabberd on our server and followed the API documentation at the link below.
https://docs.ejabberd.im/developer/ejabberd-api/admin-api/#registered-users
I want to try the APIs in Postman before implementing anything, but I couldn't find the API URL and host name in any of the documentation.
My ejabberd server admin URL is http://192.168.5.242:5280/admin
I also wish to use https://www.npmjs.com/package/ejabberd, but it requires a host name.
I have tried many ports other than 5280, but none of them work for me.

I couldn't find the API URL and host name in any of the documentation.
You define the port number in the ejabberd configuration file, in the 'listen' section. For example, in my case I use port 5282 for mod_http_api, with the path /api:
listen:
  -
    port: 5282
    module: ejabberd_http
    request_handlers:
      "/api": mod_http_api
      "/bosh": mod_bosh
      "/oauth": ejabberd_oauth
      "/rest": mod_rest
My ejabberd server admin URL is http://192.168.5.242:5280/admin
Then, if you add the lines that I have, your URL for mod_http_api would be http://192.168.5.242:5282/api
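As for the host name: the "host" argument in API calls like registered_users refers to one of the XMPP virtual hosts defined in the 'hosts' section of ejabberd.yml ("localhost" by default), and that is presumably also the host name the npm package asks for.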

As an example call, I use this PHP script:
<?php
$url = 'localhost:5282/api/registered_users/';
// These credentials are unused below; they would only matter if
// mod_http_api were configured to require authentication.
$login = "key";
$password = 'secret';
$info = array(
    "host" => "localhost"   // the XMPP virtual host to query
);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($info));
$output = curl_exec($ch);
curl_close($ch);
print_r($output);
?>
This is the result of the query:
$ php test.php
["user1","user2"]
Sniffing the network traffic, this is the request:
POST /api/registered_users/ HTTP/1.1
Host: localhost:5282
Accept: */*
Content-Length: 20
Content-Type: application/x-www-form-urlencoded
{"host":"localhost"}
and this is the response:
HTTP/1.1 200 OK
Content-Length: 17
Content-Type: application/json
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type, Authorization, X-Admin
["user1","user2"]

Related

HttpClient GET works when Fiddler is running and fails with 403 when Fiddler is not running

I have seen many posts, but none of the solutions has helped me. I have a simple GET to one of Amazon's new web services. The client is HttpClient.
When Fiddler is running, the call passes and returns a result. When Fiddler is not running, it fails. Since it was not failing while Fiddler was on, I installed another tool, "HTTP Debugger", to inspect the failure, and I could not find any difference between the two requests (the pass and the failure). Both captures are given below.
Failure Capture:
GET /sellers/v1/marketplaceParticipations HTTP/1.1
Accept: application/json
x-amz-access-token: Atza|IwEBIN3hOVtNi1xM47txtHcqXi5A3C960AypB7pkWYCxEo7lNiL9EFR-1b_EoD6PQ8lzAXgM4zujF0OBv0NS7sYQ9bWqPMDhHFt8kgvdmlmk3==jrinknrO6PYlZgmFLRGn1Hzmvgldmnj4973bjkfnbkldlcvld vc0BmuqKHUreonrWQxFO49u0yoIiNHVzSxHP0Wo4nWKW5pdd5Fj73gYxnZQQeYF5EAy8lKDCLtndTnJCdlrv5Kk8JK8iFD_H7h3FF5H4gNyTx3uIHxMaU8OkLz_IigsCTNQHwljnubhQlR9aK0J6lRbb0QfOQ4BAT_e1GOKDkShu-U5OdchdF5qNUkKU
user-agent: MSolution/1.0.0.0
X-Amz-Date: 20201201T224620Z
Authorization: AWS4-HMAC-SHA256 Credential=AKIA5SU6JNBJKDAQB76813QI6V/20201201/eu-west-1/execute-api/aws4_request, SignedHeaders=accept;host;user-agent;x-amz-access-token;x-amz-date, Signature=71dd12ee0eaf33cd142dwr242424e91cb5c4bfd6fd4f46d929d
Host: sellingpartnerapi-eu.amazon.com
Connection: Keep-Alive
The error is Forbidden (just this).
The success capture is (Fiddler running):
GET /sellers/v1/marketplaceParticipations HTTP/1.1
Accept: application/json
x-amz-access-token: Atza|IwEBIMfdZDDaca8HrDGIPft-HQs3Vzi75I4Bk9iNKfsHTkfwsfsfcsvcsaP86DqKkoZE37TiDr3XvmD_vdvavcUE9TzdXhf2jjuULL04keBHI_XYrnTnhXaCPE0gUAc8HvIiW7OXSERz_3RlS9R-nu2lTo_jqzaz0mbUaa-evavaVAVLauh2Ue7Io8pE1tThRTcqM60igPcrBViAUptTAsq-IL5ZT7hOfbNJTJ31GeN8e8IzjkWfe9n4l7B799VM1bJnC-D_alZ2J0HHj4cBNjd3RzAEvavav3fGWkW5iH2_MZ3IyaxYnslvSzNH4h8tvay87OywkkxVUKIn
user-agent: MSolution/1.0.0.0
X-Amz-Date: 20201201T224820Z
Authorization: AWS4-HMAC-SHA256 Credential=AKIAYWBDAQ7XYIPQI6V/20201201/eu-west-1/execute-api/aws4_request, SignedHeaders=accept;host;user-agent;x-amz-access-token;x-amz-date, Signature=7e4db1c114219546848eaffnvclknslnlcvs63c9e0af50edc3cdbe7231c9b
Host: sellingpartnerapi-eu.amazon.com
I have used:
// Accept any server certificate and enable all SSL/TLS protocol versions
ServicePointManager.ServerCertificateValidationCallback +=
    (senderSP, certificate, chain, sslPolicyErrors) => true;
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11
    | SecurityProtocolType.Tls12 | SecurityProtocolType.Ssl3;
Not sure what is causing this, but has anyone seen it before?
The same call works through RestSharp, but I do not want to use RestSharp.

404 redirect to another server/domain

I'm looking for a way to redirect to another server/domain when the response from the HTTP server is 404.
acl not_found status 404
acl found_ceph status 200
use_backend minio_s3 rsprep ^HTTP/1.1\ 404\ (.*)$ HTTP/1.1\ 302\ Found\nLocation:\ / if not_found
use_backend ceph if found_ceph
But it is still not working; this rule always goes to the minio_s3 backend.
Thank you for your advice.
When the response from this backend has status 404, first add a Location header that will send the browser to example.com with the original URI intact, then set the status code to 302 so the browser executes a redirect.
backend my-backend
    mode http
    server my-server 203.0.113.113:80 check inter 60000 rise 1 fall 2
    http-response set-header Location http://example.com%[capture.req.uri] if { status eq 404 }
    http-response set-status 302 if { status eq 404 }
Test:
$ curl -v http://example.org/pics/funny/cat.jpg
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to example.org (127.0.0.1) port 80 (#0)
> GET /pics/funny/cat.jpg HTTP/1.1
> User-Agent: curl/7.35.0
> Host: example.org
> Accept: */*
The actual back-end returns 404, but we don't see it. Instead...
< HTTP/1.1 302 Moved Temporarily
< Last-Modified: Thu, 04 Aug 2016 16:59:51 GMT
< Content-Type: text/html
< Content-Length: 332
< Date: Sat, 07 Oct 2017 00:03:22 GMT
< Location: http://example.com/pics/funny/cat.jpg
The response body from the back-end's 404 error page will still be sent to the browser but, as it turns out, the browser will not display it, so no harm is done. This requires HAProxy 1.6 or later.
@Michael's answer is rather good, but it isn't working for me, for two reasons:
- Mainly because the %[capture.req.uri] tag resolves to empty (HAProxy 1.7.9 Docker image).
- Also because the original answer is incomplete: the frontend section is missing...
So I struggled for a while, since you can find all kinds of answers on the Internet, between those who swear the 404 logic should go in the frontend and those who choose the backend, with every possible kind of tag...
This is my answer, which works for me.
My use case is that if an image is not found on the backend behind HAProxy, then an S3 bucket is checked.
The entry point is: https://myhostname:8080/path/to/image.jpeg
defaults
    mode http

global
    log 127.0.0.1:514 local0 debug

frontend come_on_over_here
    bind :8080
    # The following two lines are here to save values while we have access
    # to them. They won't be available in the backend section.
    http-request set-var(txn.path) path
    http-request set-var(txn.query) query
    http-request replace-value Host localhost:8080 dev.local:80
    default_backend onprems_or_s3_be

backend onprems_or_s3_be
    log global
    acl path_photos var(txn.path) -m beg /path/prefix/i/want/to/strip/off
    acl p_ext_jpeg var(txn.path) -m end .jpeg
    acl is404 status eq 404
    http-response set-header Location https://mybucket.s3.eu-west-3.amazonaws.com"%[var(txn.path),regsub(^/path_prefix_i_want_to_strip_off/,/)]?%[var(txn.query)]" if path_photos p_ext_jpeg is404
    http-response set-status 301 if is404
    server onprems_server dev.local:80 check

How to imitate the behaviour of wget or curl in Perl

I am a newbie with Perl and ActiveMQ.
I have downloaded this Perl Nagios program to check the ActiveMQ queues. The problem is that the program dies at this main line:
my $page = get "http://admin:admin\@$address:$port/admin/xml/queues.jsp" or die "Cannot get XML file: $!\n";
I replaced that line with these lines so I could check the return code:
use LWP::UserAgent;  # needed for LWP::UserAgent->new

my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;
my $page = $ua->get("http://admin:admin\@$address:$port/admin/xml/queues.jsp");
if ($page->is_success) {
    print $page->decoded_content;  # or whatever
}
else {
    die $page->status_line;
}
Now, it reports:
401 Unauthorized
But wget is still able to download the page:
Connecting to 127.0.0.1:8161... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Reusing existing connection to 127.0.0.1:8161.
HTTP request sent, awaiting response... 200 OK
Length: 2430 (2.4K) [text/xml]
Saving to: `queues.jsp'
How can I configure the UserAgent to make the get call imitate wget's behaviour?
Do you know another script/program to monitor the ActiveMQ queues?
Is there any way to get the queue values in plain text? Then I would write my own bash script.
Update 1
As @mob requested, here is the output of wget --debug:
DEBUG output created by Wget 1.12 on linux-gnu.
--2017-09-06 19:27:15-- http://admin:*password*@127.0.0.1:8161/admin/xml/queues.jsp
Connecting to 127.0.0.1:8161... connected.
Created socket 3.
Releasing 0x0000000002586c10 (new refcount 0).
Deleting unused 0x0000000002586c10.
---request begin---
GET /admin/xml/queues.jsp HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: 127.0.0.1:8161
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 401 Unauthorized
WWW-Authenticate: basic realm="ActiveMQRealm"
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 1293
Connection: keep-alive
Server: Jetty(7.6.9.v20130131)
---response end---
401 Unauthorized
Registered socket 3 for persistent reuse.
Skipping 1293 bytes of body: [<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 401 Unauthorized</title>
</head>
<body>
<h2>HTTP ERROR: 401</h2>
<p>Problem accessing /admin/xml/queues.jsp. Reason:
<pre> Unauthorized</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
] done.
Reusing existing connection to 127.0.0.1:8161.
Reusing fd 3.
---request begin---
GET /admin/xml/queues.jsp HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: 127.0.0.1:8161
Connection: Keep-Alive
Authorization: Basic xxxxxxxxxxxxxxxxxxxx
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=o7kaw1kbzcy91dozx82c8dq2j;Path=/admin
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/xml;charset=ISO-8859-1
Content-Length: 2430
Connection: keep-alive
Server: Jetty(7.6.9.v20130131)
---response end---
200 OK
Stored cookie 127.0.0.1 8161 /admin <session> <insecure> [expiry none] JSESSIONID o7kaw1kbzcy91dozx82c8dq2j
Length: 2430 (2.4K) [text/xml]
Saving to: `queues.jsp'
100%[================================================================================>] 2,430 --.-K/s in 0s
2017-09-06 19:27:15 (395 MB/s) - `queues.jsp' saved [2430/2430]
The only difference in the ---request begin--- sections of both attempts is
Authorization: Basic xxxxxxxxxxxxxxxxxxxx
found only in the second try.
LWP::UserAgent does not parse the username:password part of the URL. I suspect this is to discourage the insecure practice of putting the username and password in the URL, where they can easily be stolen from programs and server logs.
You can override get_basic_credentials to pull the username and password out of the URL. This doesn't solve the security problem.
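For example, here is a minimal sketch (untested) of such an override; the MyUA subclass name is just for illustration, and it assumes the URL carries "user:password" in its userinfo part:
package MyUA;
use parent 'LWP::UserAgent';

# Called by LWP when the server sends a 401 challenge; return the
# credentials embedded in the request URL.
sub get_basic_credentials {
    my ($self, $realm, $uri, $isproxy) = @_;
    return split /:/, $uri->userinfo, 2;
}

package main;
my $ua = MyUA->new;  # use $ua like a normal LWP::UserAgent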
Or you can call authorization_basic on an HTTP::Request object to set the username and password for this particular request.
my $ua = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => $url);
$req->authorization_basic($user, $password);
my $res = $ua->request($req);
Or you can call credentials on the UserAgent to set up passwords for various host/port combinations rather than for just one request. This is like storing your passwords in the browser, where $ua is the browser.
my $ua = LWP::UserAgent->new;
$ua->credentials("$host:$port", $realm, $user, $password);
my $res = $ua->get($url);
Or you can switch to the less featureful but better designed HTTP::Tiny, or the slightly more featureful HTTP::Tiny::UA. Both will parse and use the username:password part of the URL.
use HTTP::Tiny;
my $ua = HTTP::Tiny->new;
my $res = $ua->get($url);

Neteller REST API gives an error

I've been working with the Neteller REST API and I came across an issue.
I am receiving this response: { "error": "invalid_client" }
My code is:
$username = '**********';
$password = '*********************************';
$curl = curl_init();
curl_setopt($curl, CURLOPT_POST, 1);
curl_setopt($curl, CURLOPT_URL, "https://test.api.neteller.com/v1/oauth2/token?grant_type=client_credentials");
curl_setopt($curl, CURLOPT_USERPWD, "$username:$password");
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Content-Type:application/json", "Cache-Control:no-cache"));
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$serverOutput = curl_exec($curl);
echo $serverOutput;
The documentation says:
Client authentication failed (e.g., unknown client, no client authentication included, or
unsupported authentication method). The authorization server MAY return an HTTP 401
(Unauthorized) status code to indicate which HTTP authentication schemes are
supported. If the client attempted to authenticate via the "Authorization" request header
field, the authorization server MUST respond with an HTTP 401 (Unauthorized) status
code and include the "WWW-Authenticate" response header field matching the
authentication scheme used by the client.
But I'm not sure I completely understand this.
I've tried every possible solution that I found online, but nothing works. Is there something wrong with my cURL call?
Thanks for your time.
You get this error message if your IP is blocked. Log in to the Neteller TEST merchant site (test.merchant.neteller.com). You will need to email support to get a user if you haven't already. Go to Developer / API Settings and check that the APIs are enabled and that your IPs are added.
You need to do the same thing for production (merchant.neteller.com).
It might be a header issue.
Try this as your content type:
application/x-www-form-urlencoded
This should probably solve it:
$data = array("scope" => "default");
curl_setopt($curl, CURLOPT_POSTFIELDS, $data);
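Putting those two suggestions together, here is a minimal sketch of the corrected call; the credentials are placeholders. One caveat: passing an array directly to CURLOPT_POSTFIELDS makes PHP's curl send a multipart/form-data body, so http_build_query() is used here to produce a genuinely form-urlencoded one:
$username = '**********';            // your client ID (placeholder)
$password = '*********************'; // your client secret (placeholder)
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "https://test.api.neteller.com/v1/oauth2/token?grant_type=client_credentials");
curl_setopt($curl, CURLOPT_POST, 1);
curl_setopt($curl, CURLOPT_USERPWD, "$username:$password");
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Content-Type: application/x-www-form-urlencoded", "Cache-Control: no-cache"));
curl_setopt($curl, CURLOPT_POSTFIELDS, http_build_query(array("scope" => "default")));
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$serverOutput = curl_exec($curl);
curl_close($curl);
echo $serverOutput;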

Warning: file_get_contents(https://graph.facebook.com/me?access_token=)

Warning: file_get_contents(https://graph.facebook.com/me?access_token=) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.0 400 Bad Request in /var/www/dsg/signed_request.php on line 25
I have checked that allow_url_fopen is set to on in my php.ini, and also confirmed it is on in the output of phpinfo(). I am still getting the error listed above. Does anyone know how this can be made to work? Perhaps someone has converted it to an alternative?
You can use curl, which is what should really be used for network requests rather than file_get_contents. I don't know why Facebook used that function in their examples. curl has error handling and will follow redirects if you want it to, so you can figure out exactly what is happening.
You can create your own small helper function like this:
function viacurl($location){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $location);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    $out = curl_exec($ch);
    if ($out === false) {
        die(curl_error($ch)); // basic error handling
    }
    curl_close($ch);
    return $out;
}
And use it like this:
$response = viacurl($url);