I want to use mIRC to create a folder on Dropbox, but the echo always gives me this message:
400 Bad Request
The plain HTTP request was sent to HTTPS port
I have no idea why this happens. Here is my code:
alias dropboxCreateFolder {
sockclose dropboxCreateFolder
sockopen dropboxCreateFolder api.dropboxapi.com 443
}
ON *:SOCKOPEN:dropboxCreateFolder: {
if ($sockerr) { sockclose $sockname | halt }
var %data = {"path":"/myfile/songs"}
sockwrite -nt $sockname POST /2/files/create_folder HTTP/1.1
sockwrite -nt $sockname Host: api.dropboxapi.com
sockwrite -nt $sockname User-Agent: api-explorer-client
sockwrite -nt $sockname Authorization: Bearer Access_Token
sockwrite -nt $sockname Content-Type: application/json
sockwrite -nt $sockname $crlf $+ %data
}
ON *:SOCKREAD:dropboxCreateFolder: {
if ($sockerr) { sockclose $sockname | halt }
else {
var %sockreader | sockread %sockreader
echo -s %sockreader
}
}
1- You're not using SSL. This is a Dropbox API requirement.
2- You're not setting a Content-Length header; you should really do this when POSTing.
alias dropboxCreateFolder {
sockclose dropboxCreateFolder
sockopen -e dropboxCreateFolder api.dropboxapi.com 443
}
ON *:SOCKOPEN:dropboxCreateFolder: {
if ($sockerr) { sockclose $sockname | halt }
var %data = {"path":"/myfile/songs"}
sockwrite -nt $sockname POST /2/files/create_folder HTTP/1.1
sockwrite -nt $sockname Host: api.dropboxapi.com
sockwrite -nt $sockname User-Agent: api-explorer-client
sockwrite -nt $sockname Authorization: Bearer Access_Token
sockwrite -nt $sockname Content-Type: application/json
sockwrite -nt $sockname Content-Length: $len(%data)
sockwrite -nt $sockname $crlf $+ %data
}
ON *:SOCKREAD:dropboxCreateFolder: {
if ($sockerr) { sockclose $sockname | halt }
else {
var %sockreader | sockread %sockreader
echo -s %sockreader
}
}
The error you showed tells you that you sent a plain HTTP request to an HTTPS port. This means that api.dropboxapi.com expects SSL connections on port 443, but you're attempting to create a non-SSL connection.
You need to specify the -e switch in your sockopen command to create an SSL connection instead.
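If it still fails after that, it can help to sanity-check the token and endpoint outside mIRC first. A minimal sketch with curl, using the same endpoint and body as the script above (replace Access_Token with a real token):
curl -X POST https://api.dropboxapi.com/2/files/create_folder \
  -H "Authorization: Bearer Access_Token" \
  -H "Content-Type: application/json" \
  -d '{"path":"/myfile/songs"}'
If that works but the mIRC script still fails, the problem is in the socket code rather than the token.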
First, I authorize with this command:
curl -v https://api.sandbox.paypal.com/v1/oauth2/token \
-H "Accept: application/json" \
-H "Accept-Language: en_US" \
-u "client_id:secret" \
-d "grant_type=client_credentials"
Then I get an access token. With the access token I run this command:
curl -v -X GET https://api.sandbox.paypal.com/v1/invoicing/invoices?page=1 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer MY_TOKEN"
But I get this error:
{"name":"AUTHORIZATION_ERROR","message":"Authorization error occurred.","information_link":"https://developer.paypal.com/docs/api/invoicing/#errors","debug_id":"75bc8ac7b89e1"}
Any ideas why? Most of the commands give me the same error, but this command works fine:
curl -v -X POST https://api.sandbox.paypal.com/v2/checkout/orders \
-H "Content-Type: application/json" \
-H "Authorization: Bearer MY-TOKEN" \
-d '{
"intent": "CAPTURE",
"purchase_units": [
{
"amount": {
"currency_code": "USD",
"value": "100.00"
}
}
]
}'
Any ideas what I'm missing here? Thanks in advance.
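For completeness, here is the same two-step flow as a single shell sketch, assuming jq is available to pull access_token out of the JSON response:
# get a token, then call the invoicing endpoint with it
TOKEN=$(curl -s https://api.sandbox.paypal.com/v1/oauth2/token \
  -H "Accept: application/json" \
  -H "Accept-Language: en_US" \
  -u "client_id:secret" \
  -d "grant_type=client_credentials" | jq -r '.access_token')
curl -v -X GET "https://api.sandbox.paypal.com/v1/invoicing/invoices?page=1" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN"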
So far, the API calls that appear to get me closer to my end goal of uploading or viewing files and folders via the API are as follows:
POST https://demo.pydio.com/a/tree/admin/list
POST https://demo.pydio.com/a/workspace
GET https://demo.pydio.com/a/config/datasource
GET https://demo.pydio.com/a/config/virtualnodes/
Pydio Cells API Documentation
https://pydio.com/en/docs/developer-guide/cells-api
Cells provides an S3 API to interact with data. Uploading/downloading with curl is divided into two steps:
1. Get a JWT
2. Upload/download
You can use the following bash scripts:
./cells-download.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET FILENAME WORKSPACE_SLUG/PATH NEW_NAME_AFTER_DOWNLOAD
./cells-upload.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET ABS_PATH_FILE NEW_NAME WORKSPACE_SLUG/PATH
CLIENT_SECRET is found in /home/pydio/.config/pydio/cells/pydio.json >> dex >> staticClients >> Secret:
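A hedged one-liner to pull that value out with jq (the exact key layout may differ between Cells versions; adjust the path if staticClients is keyed differently):
# assumes dex.staticClients is an array whose first entry holds the Secret
jq -r '.dex.staticClients[0].Secret' /home/pydio/.config/pydio/cells/pydio.json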
cells-download.sh
=============================
#!/bin/bash
HOST=$1
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
DEST=$6
NEW_NAME=$7
AUTH_STRING=$(echo cells-front:$CELLS_FRONT_PWD | base64)
AUTH_STRING=${AUTH_STRING::-4}
JWT=$(curl -s --request POST \
--url http://$HOST/auth/dex/token \
--header "Authorization: Basic $AUTH_STRING" \
--header 'Cache-Control: no-cache' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')
JWT=$(echo $JWT | sed "s/\"//g")
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Download a file via the S3 gateway.
file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="host;x-amz-content-sha256;x-amz-date"
if [[ $(uname) == "Darwin" ]]; then
iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi
payload_hash() {
# empty string
echo "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
canonical_request() {
echo "GET"
echo "/${prefix}/${file}"
echo ""
echo "host:$HOST"
echo "x-amz-content-sha256:$(payload_hash)"
echo "x-amz-date:${iso_timestamp}"
echo ""
echo "${signed_headers}"
printf "$(payload_hash)"
}
canonical_request_hash() {
local output=$(canonical_request | shasum -a 256)
echo "${output%% *}"
}
string_to_sign() {
echo "AWS4-HMAC-SHA256"
echo "${iso_timestamp}"
echo "${date_scope}/${region}/s3/aws4_request"
printf "$(canonical_request_hash)"
}
AWS_SECRET_ACCESS_KEY="gatewaysecret"
signature_key() {
local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}
hex_key() {
xxd -p -c 256
}
hmac_sha256() {
local hexkey=$1
openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}
signature() {
string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}
curl \
-H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
-H "Host: $HOST" \
-H "Date: ${date_header}" \
-H "x-amz-acl: public-read" \
-H 'Content-Type: application/octet-stream' \
-H "x-amz-content-sha256: $(payload_hash)" \
-H "x-amz-date: ${iso_timestamp}" \
"http://$HOST/${prefix}/${file}" --output $NEW_NAME
=============================
cells-upload.sh
=============================
#!/bin/bash
HOST=$1
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
NEW_NAME=$6
DEST=$7
AUTH_STRING=$(echo cells-front:$CELLS_FRONT_PWD | base64)
AUTH_STRING=${AUTH_STRING::-4}
JWT=$(curl -s --request POST \
--url http://$HOST/auth/dex/token \
--header "Authorization: Basic $AUTH_STRING" \
--header 'Cache-Control: no-cache' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')
JWT=$(echo $JWT | sed "s/\"//g")
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Upload a file to AWS S3.
file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
if [[ $(uname) == "Darwin" ]]; then
iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi
payload_hash() {
local output=$(shasum -ba 256 "$file")
echo "${output%% *}"
}
canonical_request() {
echo "PUT"
echo "/${prefix}/${NEW_NAME}"
echo ""
echo "content-type:${content_type}"
echo "host:$HOST"
echo "x-amz-acl:public-read"
echo "x-amz-content-sha256:$(payload_hash)"
echo "x-amz-date:${iso_timestamp}"
echo ""
echo "${signed_headers}"
printf "$(payload_hash)"
}
canonical_request_hash() {
local output=$(canonical_request | shasum -a 256)
echo "${output%% *}"
}
string_to_sign() {
echo "AWS4-HMAC-SHA256"
echo "${iso_timestamp}"
echo "${date_scope}/${region}/s3/aws4_request"
printf "$(canonical_request_hash)"
}
AWS_SECRET_ACCESS_KEY="gatewaysecret"
signature_key() {
local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}
hex_key() {
xxd -p -c 256
}
hmac_sha256() {
local hexkey=$1
openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}
signature() {
string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}
curl \
-T "${file}" \
-H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
-H "Host: $HOST" \
-H "Date: ${date_header}" \
-H "x-amz-acl: public-read" \
-H 'Content-Type: application/octet-stream' \
-H "x-amz-content-sha256: $(payload_hash)" \
-H "x-amz-date: ${iso_timestamp}" \
"http://$HOST/${prefix}/${NEW_NAME}"
It turns out my original assumption that the Pydio Cells S3 buckets require an AWS account was wrong. Pydio Cells uses the same code or syntax (not 100% sure which) as AWS buckets. The file system can be accessed via S3 against the Pydio endpoint https://demo.pydio.com/io, where io is the S3 bucket.
To test, I am using Postman to first place a file named 'Query.sql' (with content) into the 'Personal Files' workspace.
Authorization: AWS Signature
AccessKey: the token returned when using OpenID Connect (the "id_token" contained in the response body).
SecretKey: the demo uses the key 'gatewaysecret'
Advanced Options:
AWS Region: the default is 'us-east-1'. I didn't have to enter anything here, but it still worked when I set it to 'us-west-1'.
Service Name: 's3' - I found that this is required
Session Token: I left this blank.
Create files using PUT. Download files using GET.
PUT https://demo.pydio.com/io/personal-files/Query.sql
The example below shows how to first create a file and then pull its content / download the file.
For my GET example, I manually placed a file named Query.sql on the demo.pydio.com server in the Personal Files workspace. The request below accesses and/or downloads that file.
GET https://demo.pydio.com/io/personal-files/Query.sql
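For reference, the same two requests can be reproduced with plain curl instead of Postman, assuming a curl build with --aws-sigv4 support (7.75 or newer); ID_TOKEN stands for the OpenID Connect id_token:
# upload (PUT) a local Query.sql into Personal Files
curl -T Query.sql \
  --user "ID_TOKEN:gatewaysecret" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  https://demo.pydio.com/io/personal-files/Query.sql
# download (GET) it back
curl --user "ID_TOKEN:gatewaysecret" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  -o Query.sql https://demo.pydio.com/io/personal-files/Query.sql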
I'm testing a PayPal IPN listener and it seems it never returns a VERIFIED status. Here's the script:
<?php
//read the post from PayPal system and add 'cmd'
$req = 'cmd=_notify-validate';
foreach ($_POST as $key => $value) {
$value = urlencode(stripslashes($value));
$req .= "&$key=$value";
}
//post back to PayPal system to validate
$header = "POST /cgi-bin/webscr HTTP/1.1\r\n";
$header .= "Content-Type: application/x-www-form-urlencoded\r\n";
$header .= "Host:www.sandbox.paypal.com\r\n";
$header .= "Connection: close\r\n";
$header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
//$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30);
$fp = fsockopen ('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);
//
//error connecting to paypal
if (!$fp) {
//
}
//successful connection
if ($fp) {
fputs ($fp, $header . $req);
//while (!feof($fp)) {
$res = stream_get_contents($fp, 1024);
//$res = fgets ($fp, 1024);
$res = trim($res); //NEW & IMPORTANT
if (strcmp($res, "VERIFIED") == 0) {
// if status is COMPLETED insert order into database
if ($_POST['payment_status'] == "Completed") {
$subject = 'Instant Payment Notification - COMPLETED';
$to = 'my_email_address#gmail.com'; // your email
$body = "An instant payment notification was successfully recieved\n";
$body .= "from ".$_POST['payer_email']." on ".date('m/d/Y');
$body .= " at ".date('g:i A')."\n\nDetails:\n";
foreach ($_POST as $key => $value) { $body .= "\n".$key." : ".$value; }
mail($to, $subject, $body);
}
}
else {
//insert into DB in a table for bad payments for you to process later
$subject = 'Instant Payment Notification - ELSE status';
$to = 'my_email_address#gmail.com'; // your email
$body = "Else clause \n";
$body .= "from ".$_POST['payer_email']." on ".date('m/d/Y');
$body .= " at ".date('g:i A')."\n\nDetails:\n";
foreach ($_POST as $key => $value) { $body .= "\n".$key." : ".$value; }
mail($to, $subject, $body);
}
//}
fclose($fp);
}
?>
I've tried the code with while (!feof($fp)) and $res = fgets ($fp, 1024)
I never get the VERIFIED email, and I'm not even sure whether it is a coding issue. That's why I'm asking here: maybe some of you have had this problem before and can help me. Thanks in advance.
OK, here is the $res posted back:
HTTP/1.1 200 OK
Date: Wed, 26 Feb 2014 12:03:23 GMT
Server: Apache
domain=.paypal.com; path=/; Secure; HttpOnly
Set-Cookie: cookie_check=yes; expires=Sat, 24-Feb-2024 12:03:24 GMT;domain=.paypal.com; path=/; Secure; HttpOnly
Set-Cookie: navcmd=_notify-validate; domain=.paypal.com; path=/; Secure; HttpOnly
Set-Cookie: navlns=0.0; expires=Fri, 26-Feb-2016 12:03:24 GMT; domain=.paypal.com; path=/; Secure; HttpOnly
Set-Cookie: Apache=10.72.109.11.1393416203743713; path=/; expires=Fri, 19-Feb-44 12:03:
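For comparison, here is a hedged way to hit the same validation endpoint with curl, which strips the headers and shows only the body (the form fields beyond cmd are just placeholders):
curl -s https://www.sandbox.paypal.com/cgi-bin/webscr \
  -d "cmd=_notify-validate&sample_key=sample_value"
# the body is just VERIFIED or INVALID, with no status line or headers in front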
Following is the sample code:
$req = "122";
$header = "POST /cgi-bin/webscr HTTP/1.1\r\n";
$header .= "Host: www.sandbox.paypal.com\r\n";
$header .= "Connection: close\r\n";
$header .= "Content-Type: application/x-www-form-urlencoded\r\n";
$header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
$fp = fsockopen ('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);
if(!$errno)
{
var_dump($fp);
}
else
{
echo "ERROR: $errno, $errstr";
}
When I connect to the sandbox, it gives me a connection timed out error:
ERROR: 110, Connection timed out
So I debugged the problem, found that it had something to do with SSL, and verified the existence and accessibility of OpenSSL on the server.
I tested OpenSSL and connectivity on the server with the following:
"openssl s_client -connect www.sandbox.paypal.com:443"
I get:
"Socket: Connection Timed out"
I checked iptables and there is no rule blocking port 443.
So I tried the same check against paypal.com, like the following:
$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30);
It worked immediately.
I then tried from another server, and I was able to connect to sandbox.paypal.com, but not from the server where I need to demo and test payments.
My hosting provider is clueless, and I am pulling my hair out here.
I appreciate any help on this.
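In case it helps anyone reproduce this, here is a rough connectivity check (nothing PayPal-specific) to run from both servers and compare; it assumes bash with /dev/tcp support and the coreutils timeout command:
# raw TCP reachability on port 443, 5 second timeout
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/www.sandbox.paypal.com/443' && echo "tcp ok" || echo "tcp blocked"
# TLS handshake only
echo | openssl s_client -connect www.sandbox.paypal.com:443 -servername www.sandbox.paypal.com 2>/dev/null | head -n 5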
Yes, of course; my first question on Stack Overflow :)
Here is the original code I had in my PHP script:
$header = "POST /cgi-bin/webscr HTTP/1.0\r\n";
$header .= "Content-Type: application/x-www-form-urlencoded\r\n";
$header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
I got an email from PayPal saying that I needed to upgrade my IPN script so that it uses HTTP/1.1. So here is what I changed my code to, based on their directions:
$header .="POST /cgi-bin/webscr HTTP/1.1\r\n";
$header .="Content-Type: application/x-www-form-urlencoded\r\n";
$header .="Host: www.paypal.com\r\n";
$header .="Connection: close\r\n";
Payments have gone through today, but the IPN is no longer updating my database and this is the only change I made to it. Any ideas on what to do?
Thanks!
Yes, I have just done battle with this. What worked for me was to remove the Connection: close header and add a trim() to the response back from PayPal. Here are the headers:
$header = "POST /cgi-bin/webscr HTTP/1.1\r\n";
$header .= "Content-Type: application/x-www-form-urlencoded\r\n";
$header .= "Host: www.paypal.com\r\n";
$header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
Here is the fsockopen:
$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30);
And here is the trim() on the response back from PayPal:
if (!$fp) {
// HTTP ERROR
error_mail("Could not open socket");
//
} else {
fputs ($fp, $header . $req);
while (!feof($fp)) {
$res = trim(fgets ($fp, 1024));
}
//
// check the payment_status is Completed
// check that receiver_email is your Primary PayPal email
//
if ((strcmp ($res, "VERIFIED") == 0) && ($payment_status == "Completed") && ($receiver_email == $valid_receiver_email)) {
That worked for me.