PhpStorm FTP 425 Unable to build data connection: Cannot assign requested address

The PhpStorm FTP upload fails:
[17-1-16 5:17 PM] Failed to transfer file '/a': cant open output connection for file "ftp://192.168.1.229:21/a". Reason: "425 Unable to build data connection: Cannot assign requested address".
[17-1-16 5:17 PM] Upload to server completed in less than a minute: 108 files transferred, 3 items failed (541.1 Kb/s)
PhpStorm is running on Deepin (Linux). My current kernel network settings are:
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.ip_local_port_range = 10000 65000
I tried widening the local port range, but the upload still fails.
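To take PhpStorm out of the picture, something like the following Python sketch (the credentials are placeholders, not the real account) can exercise the same data connection outside the IDE:
from ftplib import FTP
# Hypothetical credentials -- replace with the account PhpStorm uses.
ftp = FTP()
ftp.connect("192.168.1.229", 21, timeout=30)
ftp.login("user", "password")
ftp.set_pasv(True)      # the 425 error concerns the data connection; try False (active mode) too
ftp.retrlines("LIST")   # any command that has to open a data connection
ftp.quit()
If the listing fails the same way in both passive and active mode, the problem is presumably on the server or network side rather than in the IDE.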
Can anyone help?

I got the same problem in IntelliJ IDEA.
Changing the connection type from FTP to SFTP helped. I hope that's an option for you as well.

Related

FTPD Server Issue

I am trying to use my XAMPP server and for the life of me can't understand why ProFTPD will not start. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone explain what the application log means and how I might go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
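For what it's worth, the first warning says that the machine's own hostname ('bogon') cannot be resolved to an IP address. A small Python sketch (just a diagnostic, unrelated to ProFTPD itself) can confirm that resolution failure:
import socket
name = socket.gethostname()   # reports 'bogon' on this machine, per the log
try:
    print(name, "resolves to", socket.gethostbyname(name))
except socket.gaierror as err:
    # this is the condition ProFTPD reports as "unable to determine IP address of 'bogon'"
    print("cannot resolve", name, "->", err)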

IBM BLUEMIX BLOCKCHAIN SDK-DEMO failing

I have been working with the HFC SDK for Node.js. It used to work, but since last night I have been having problems.
When running helloblockchain.js it only works occasionally; most of the time I get this error when it tries to enroll a new user:
E0113 11:56:05.983919636 5288 handshake.c:128] Security handshake failed: {"created":"#1484304965.983872199","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484304965.983866102","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Error: Failed to register and enroll JohnDoe: Error
Other times, the enroll works and the failure appears deploying the chaincode:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:14:27.341527043 5455 handshake.c:128] Security handshake failed: {"created":"#1484306067.341430168","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306067.341421859","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Failed to deploy chaincode: request={"fcn":"init","args":["a","100","b","200"],"chaincodePath":"chaincode","certificatePath":"/certs/peer/cert.pem"}, error={"error":{"code":14,"metadata":{"_internal_repr":{}}},"msg":"Error"}
Or:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:15:27.448867739 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.448692244","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.448668047","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
events.js:160
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:217:12)
E0113 12:15:27.563487641 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.563437122","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.563429661","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
This code worked yesterday, so I don't know what could be happening.
Does anybody know how I can fix it?
Thanks,
Javier.
These types of intermittent issues are usually related to gRPC. An initial suggestion is to ensure that you are using at least gRPC version 1.0.0.
If you are using a Mac, then the maximum number of open file descriptors should be checked (using ulimit -n). Sometimes this is initially set to a low value such as 256, so increasing the value could help.
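As a rough illustration only (a Python sketch, not part of the SDK), the limit a process actually sees can be read and raised like this:
import resource
# Soft/hard limits on open file descriptors for this process --
# the same number that `ulimit -n` reports in a shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
# The soft limit can be raised up to the hard limit without root:
if soft < 2048 <= hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (2048, hard))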
There are a couple of gRPC issues with similar symptoms:
https://github.com/grpc/grpc/issues/8732
https://github.com/grpc/grpc/issues/8839
https://github.com/grpc/grpc/issues/8382
There is a grpc.initial_reconnect_backoff_ms property that is mentioned in some of these issues. Increasing the value past the 1000 ms level might help reduce the frequency of issues. Below are instructions for how the helloblockchain.js file can be modified to set this property to a higher value.
Open the helloblockchain.js file in the Hyperledger Fabric Client example and find the enrollAndRegisterUsers function.
Add "grpc.initial_reconnect_backoff_ms": 5000 to the setMemberServicesUrl call.
chain.setMemberServicesUrl(ca_url, {
    pem: cert,
    "grpc.initial_reconnect_backoff_ms": 5000
});
Add "grpc.initial_reconnect_backoff_ms": 5000 to the addPeer call.
chain.addPeer("grpcs://" + peers[i].discovery_host + ":" + peers[i].discovery_port, {
    pem: cert,
    "grpc.initial_reconnect_backoff_ms": 5000
});
Note that setting the grpc.initial_reconnect_backoff_ms property may reduce the frequency of issues, but it will not necessarily eliminate all issues.
The connection to the eventhub made in the helloblockchain.js file can also be a factor. There is an earlier version of the Hyperledger Fabric Client example that does not use the eventhub, and trying it may show whether that makes a difference. After running git clone https://github.com/IBM-Blockchain/SDK-Demo.git, run git checkout b7d5195 to switch to that prior level. Before running node helloblockchain.js, the git status command can be used to confirm which code level is checked out.

How to configure an Openfire server with HttpUploadComponent for offline file transfer?

I use Openfire with Conversations and would like to implement offline file transfer with HttpUploadComponent. I copied the httpupload folder into the Openfire folder and made the corresponding external component configuration in Openfire.
I also installed Python and configured the config.yml file in the httpupload folder as follows:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I started the HttpUpload server as well.
After starting the Python server, go to Openfire > Server Settings > External Components and view the external components; the first line shows whether the session has been created.
After all of this, when I try to send a file from the Android client it fails and gives me the error shown in the attached screenshot.
Where is my mistake? Thanks.
In the attached error screenshot, the last word is 403, which indicates that the problem is authorization on the HttpUploadComponent end.
I started checking the code of this component: on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it picks up the "storage_path" variable from the configuration and places uploaded files in that directory.
As mentioned in your question, you have set storage_path: ./var/lib/httpupload/
But you are on a Windows machine, and that path is not valid there.
Try giving a valid Windows path.
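For example (purely illustrative; any directory that already exists and is writable by the Python process will do), something like:
storage_path: C:/httpupload/storage/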

Scalding Tutorial: HDFS rsync errors

Please help me understand the output of an unsuccessful Scalding run on Hadoop.
I got the latest Scalding distribution from Git:
git clone https://github.com/twitter/scalding.git
After running sbt assembly from the scalding directory, I tried to run the tutorial with:
scripts/scald.rb --hdfs tutorial/Tutorial0.scala
As a result I got the following errors:
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
rsyncing 19.8M from scalding-core-assembly-0.10.0.jar to my.host.here in background...
downloading hadoop-core-1.1.2.jar from http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-core/1.1.2/hadoop-core-1.1.2.jar...
ssh: Could not resolve hostname my.host.here: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
Successfully downloaded hadoop-core-1.1.2.jar!
downloading commons-codec-1.8.jar from http://repo1.maven.org/maven2/commons-codec/commons-codec/1.8/commons-codec-1.8.jar...
Successfully downloaded commons-codec-1.8.jar!
downloading commons-configuration-1.9.jar from http://repo1.maven.org/maven2/commons-configuration/commons-configuration/1.9/commons-configuration-1.9.jar...
Successfully downloaded commons-configuration-1.9.jar!
downloading jackson-asl-0.9.5.jar from http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-asl/0.9.5/jackson-asl-0.9.5.jar...
Successfully downloaded jackson-asl-0.9.5.jar!
downloading jackson-mapper-asl-1.9.13.jar from http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar...
Successfully downloaded jackson-mapper-asl-1.9.13.jar!
downloading commons-lang-2.6.jar from http://repo1.maven.org/maven2/commons-lang/commons-lang/2.6/commons-lang-2.6.jar...
Successfully downloaded commons-lang-2.6.jar!
downloading slf4j-log4j12-1.6.6.jar from http://repo1.maven.org/maven2/org/slf4j/slf4j-log4j12/1.6.6/slf4j-log4j12-1.6.6.jar...
Successfully downloaded slf4j-log4j12-1.6.6.jar!
downloading log4j-1.2.15.jar from http://repo1.maven.org/maven2/log4j/log4j/1.2.15/log4j-1.2.15.jar...
Successfully downloaded log4j-1.2.15.jar!
downloading commons-httpclient-3.1.jar from http://repo1.maven.org/maven2/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar...
Successfully downloaded commons-httpclient-3.1.jar!
downloading commons-cli-1.2.jar from http://repo1.maven.org/maven2/commons-cli/commons-cli/1.2/commons-cli-1.2.jar...
Successfully downloaded commons-cli-1.2.jar!
downloading commons-logging-1.1.1.jar from http://repo1.maven.org/maven2/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar...
Successfully downloaded commons-logging-1.1.1.jar!
downloading zookeeper-3.3.4.jar from http://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.3.4/zookeeper-3.3.4.jar...
Successfully downloaded zookeeper-3.3.4.jar!
compiling tutorial/Tutorial0.scala
scalac -classpath /tmp/temp_scala_home_2.9.3_654763/scala-library-2.9.3.jar:/tmp/temp_scala_home_2.9.3_654763/scala-compiler-2.9.3.jar:/home/test/Cascading/scalding/scalding-core/target/scala-2.9.3/scalding-core-assembly-0.10.0.jar:/tmp/maven/hadoop-core-1.1.2.jar:/tmp/maven/commons-codec-1.8.jar:/tmp/maven/commons-configuration-1.9.jar:/tmp/maven/jackson-asl-0.9.5.jar:/tmp/maven/jackson-mapper-asl-1.9.13.jar:/tmp/maven/commons-lang-2.6.jar:/tmp/maven/slf4j-log4j12-1.6.6.jar:/tmp/maven/log4j-1.2.15.jar:/tmp/maven/commons-httpclient-3.1.jar:/tmp/maven/commons-cli-1.2.jar:/tmp/maven/commons-logging-1.1.1.jar:/tmp/maven/zookeeper-3.3.4.jar -d /tmp/script-build tutorial/Tutorial0.scala
ssh: Could not resolve hostname my.host.here: Name or service not known
rsyncing 1.5K from job-jars/Tutorial0.jar to my.host.here in background...
Waiting for 2 background threads...
ssh: Could not resolve hostname my.host.here: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
Could not rsync: /home/test/Cascading/scalding/scalding-core/target/scala-2.9.3/scalding-core-assembly-0.10.0.jar to my.host.here:scalding-core-assembly-0.10.0.jar
Could not rsync: /tmp/Tutorial0.jar to my.host.here:job-jars/Tutorial0.jar
Update:
After changing the host in scald.rb, I get the following authentication problem:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
rsyncing 19.8M from scalding-core-assembly-0.10.0.jar to node7.test.net in background...
The authenticity of host 'node7.test.net (10.1.21.32)' can't be established.
RSA key fingerprint is fa:41:31:ab:b0:46:08:8f:2b:75:0a:18:24:f9:d5:ec.
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'node7.test.net (10.1.21.32)' can't be established.
RSA key fingerprint is fa:41:31:ab:b0:46:08:8f:2b:75:0a:18:24:f9:d5:ec.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node7.test.net' (RSA) to the list of known hosts.
test@node7.test.net's password: Please type 'yes' or 'no':
Permission denied, please try again.
test@node7.test.net's password:
I enter the correct password, but the authentication error persists. How should I configure rsync?
You did change this, right?
https://github.com/twitter/scalding/blob/develop/scripts/scald.rb#l27
The default host there is my.host.here.

wget cannot locate file in different domain

I'm using wget to download files with the recursive option (-r) and with recursion allowed to go to other domains (-H). When it cannot reach a file, wget just keeps retrying:
--2013-05-05 13:38:52-- http://clnet.ucla.edu/robots.txt
Resolving clnet.ucla.edu... 128.97.168.150
Connecting to clnet.ucla.edu|128.97.168.150|:80... failed: Connection timed out.
Retrying.
--2013-05-05 13:39:14-- (try: 2) http://clnet.ucla.edu/robots.txt
Connecting to clnet.ucla.edu|128.97.168.150|:80... failed: Connection timed out.
Retrying.
and so on, up through try 10 and beyond. Is there a way to tell wget to just skip this file?