FileZilla "error while writing: failure" - file-transfer

I'm transferring a very large (35GB) file through SFTP and FileZilla.
The transfer is now 59.7% done, but I keep getting this error, and that number hasn't changed for hours.
Error: File transfer failed after transferring 1,048,576 bytes in 10 seconds
Status: Starting upload of C:\Files\static.sql.gz
Status: Retrieving directory listing...
Command: ls
Status: Listing directory /var/www/vhosts/site/httpdocs
Command: reput "C:\Files\static.sql.gz" "static.sql.gz"
Status: reput: restarting at file position 20450758656
Status: local:C:\Files\static.sql.gz => remote:/var/www/vhosts/site/httpdocs/static.sql.gz
Error: error while writing: failure
Why do I keep getting this error?

Credit to cdhowie: The remote volume was out of space.

I encountered the same situation.
Go to your server and run the df command to see whether the disk is out of space.
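If it helps, a minimal check (assuming you can SSH into the server; /var/www is just the mount the upload target lives under in the question) looks like this:
df -h              # human-readable free space on every mounted filesystem
df -h /var/www     # or just the filesystem the upload target lives on
df -i              # also worth a look: inode exhaustion produces similar write failures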

http://wiki.filezilla-project.org/Network_Configuration#Timeouts_on_large_files

I recently faced this issue; it turned out to be a disk space problem. I removed some old logs, especially the mysqld.log file, which was several GB in size. It worked after that.

In our case it was because the file exceeded the user's quota. We use Virtualmin and the virtual server had a default quota of just 1GB. Increasing that value in Virtualmin solved the problem.
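If you want to confirm a quota problem from the shell first (a rough check, assuming the standard Linux quota tools are installed and enabled; "siteuser" is a placeholder for the account that owns the upload directory):
quota -s -u siteuser     # that user's current usage and soft/hard limits
sudo repquota -s -a      # quota report for all users on quota-enabled filesystems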

The FileZilla "error while writing: failure" issue occurs when the server's storage is full. Log in to the Linux server and
run the two commands below to find out which files are consuming the most storage under /var/log, recursively.
For MB sizes:
sudo du -csh $(sudo find /var/log -type f) | grep M | sort -nr
For GB sizes:
sudo du -csh $(sudo find /var/log -type f) | grep G | sort -nr
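An alternative with the same goal that skips the grep filtering and simply sorts everything under /var/log by size (assumes GNU coreutils, for sort -h):
sudo du -ah /var/log | sort -rh | head -n 20    # the 20 largest files and directories under /var/log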

This happened to me when I tried to replace a file that was already open or running in the background. Once I closed it, I was able to overwrite the file.

Related

Unable to connect to SSH through either Cloud Shell or SCP

Before this error happened:
I have a VM. I tried to change the permissions of all folders to 777 in order to get past an error when transferring data to Cloud Run.
This led to "sudo: /etc/sudo.conf is world writable" and "sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set" when I used SSH.
I fixed it by mounting the affected disk on a temporary instance and changing the permissions back with:
chmod 755 /etc/sudo.conf
chmod 4755 /usr/bin/sudo
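In case it helps others, roughly what I did is sketched below. The names broken-vm, broken-disk and rescue-vm are placeholders, add --zone flags if you have no default zone configured, and check lsblk on the rescue instance for the real device path:
gcloud compute instances stop broken-vm
gcloud compute instances detach-disk broken-vm --disk=broken-disk
gcloud compute instances attach-disk rescue-vm --disk=broken-disk
# on rescue-vm:
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue          # device name may differ; check lsblk
sudo chmod 755 /mnt/rescue/etc/sudo.conf
sudo chmod 4755 /mnt/rescue/usr/bin/sudo
sudo umount /mnt/rescue
# then move the disk back and boot from it again:
gcloud compute instances detach-disk rescue-vm --disk=broken-disk
gcloud compute instances attach-disk broken-vm --disk=broken-disk --boot
gcloud compute instances start broken-vm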
Now I have two problems:
I am still not able to connect over SSH. I tried troubleshooting and all checks are green, and I did not have an IAP problem before.
FTP doesn't work either (I use PuTTYgen to create a private key and then update the VM's metadata).
The 20 GB disk became 65 GB. Is this what caused the problem? Is there any way to revert back to 20 GB without damaging the disk?
Right now I can still access the site and it runs fine. https://www.nasavape.com

Zookeeper: java.io.IOException: No snapshot found, but there are log entries. Something is broken

I have been working with Kafka 2.4.0 (2.11), and yesterday I had to forcefully terminate the process for some unknown reason. Since then I haven't been able to start Zookeeper due to the following error:
[2020-01-11 11:12:43,783] ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.io.IOException: No snapshot found, but there are log entries. Something is broken!
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:222)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:290)
at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:450)
at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:764)
at org.apache.zookeeper.server.ServerCnxnFactory.startup(ServerCnxnFactory.java:98)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:144)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
And as soon as I searched for this problem I found issue ZOOKEEPER-3513 reported, which may or may not explain the problem. However, what I'm finding strange is that if I delete the Kafka/Zookeeper directory and download it again from scratch, the problem persists. Does anyone know how I can solve this?
Thank you for your help
Check for the tmp/zookeeper folder on the drive where your Kafka folder is (let's say D:\), and delete the tmp folder; it will be created automatically for you once you run Zookeeper again.
Try changing your zookeeper data directory.
Your zookeeper data directory is defined in zookeeper.properties (I think the default is /tmp/zookeeper).
Perhaps you're not deleting the correct zookeeper directory?
I had the same problem, and this solution worked.
NOTE: I'm experimenting with Kafka, and not using it in production. I have no idea what else the above does, apart from fix this error...
I've faced the same issue with Zookeeper after updating from version 3.4.x to 3.5.6. As described here, I:
added an empty snapshot.0 file in the data directory
added the property 'zookeeper.snapshot.trust.empty=true' to the Zookeeper configuration file (the default is zoo.cfg)
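A rough sketch of those two steps, assuming the default /tmp/zookeeper data directory (snapshot files normally sit under its version-2 subfolder); for the Zookeeper bundled with Kafka the file to edit is config/zookeeper.properties rather than zoo.cfg, and the property name here just mirrors the one above:
touch /tmp/zookeeper/version-2/snapshot.0     # empty snapshot file, per the first step
# and in zoo.cfg (or config/zookeeper.properties):
zookeeper.snapshot.trust.empty=true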
On Windows ->
Go to the tmp folder where the Zookeeper details are stored
and delete the existing log files.
Directory path = d:\tmp\zookeeper\version-2
On Linux ->
Path = /tmp/zookeeper/version-2
Remove all the existing log files using rm -r log.1
The log files will be created automatically again, and this resolves the issue.
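Put together on Linux, a possible end-to-end sequence, assuming Kafka's bundled Zookeeper and the default /tmp/zookeeper data directory (this throws away local Zookeeper state, so only do it on a dev box):
bin/zookeeper-server-stop.sh                               # run from the Kafka directory, if Zookeeper is still up
rm -rf /tmp/zookeeper/version-2                            # drop the stale transaction logs/snapshots
bin/zookeeper-server-start.sh config/zookeeper.properties  # the directory is recreated on start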
Faced the same issue on macOS.
Solution: cd /tmp/zookeeper/version-2 and delete the log.1 file. It worked for me.
If you are on Windows, make sure you escape the backslashes in the Zookeeper temp directory location:
dataDir=d:\\tmp\\zookeeper
I created a new dir for the logs and configured the same path in zoo.cfg.
It worked :)
I use macOS and my solution was to delete everything in the dataDir, the default value should be /usr/local/var/lib/zookeeper.
For those who are using docker, I'll share my experience:
I've been running zookeeper confluentinc/cp-zookeeper:5.2.1 as follows:
docker run \
--network kafka-net --name=zookeeper \
-e ALLOW_ANONYMOUS_LOGIN=yes \
-e ZOOKEEPER_CLIENT_PORT=2181 \
-v /tmp/zookeeper-data:/var/lib/zookeeper/data \
-v /tmp/zookeeper-txn-logs:/var/lib/zookeeper/log \
-p 2181:2181 confluentinc/cp-zookeeper:5.2.1
As expected, I can see a few files placed in /tmp/zookeeper-txn-logs and /tmp/zookeeper-data on the host. After cleaning up /tmp/zookeeper-data and running again, I got the error No snapshot found, but there are log entries.
In my case, I just had to purge the data in /tmp/zookeeper-txn-logs. For a dev/production environment, I'd recommend following the docs: https://access.redhat.com/documentation/en-us/red_hat_amq/6.3/html/fabric_guide/ensemble-purgetxnlog
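In that setup the purge is just a matter of stopping the container and clearing the host-mounted txn-log directory (the paths are the bind mounts from the docker run above):
docker stop zookeeper && docker rm zookeeper
sudo rm -rf /tmp/zookeeper-txn-logs/*    # purge the transaction logs only, keep the data dir
# then repeat the docker run command above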

zookeeper + Kafka - Unable to create data directory

I'm using Zookeeper 3.4.8 as a single node and trying to use Kafka.
When I run this command:
zookeeper-server-start.sh /usr/local/kafka_2.9.2-0.8.2.2/config/zookeeper.properties
I get the below error:
[2016-02-22 17:32:41,661] ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.io.IOException: Unable to create data directory /var/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:104)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Any advice?
One reason could be an inappropriate path specified in the zoo.cfg file.
A lot of solutions on the web specify the path as ":\zookeeper-3.4.7\data".
Instead of that format, specify the address as the full path from your C: drive to the data folder. It worked for me. (Don't forget to use a double backslash \\ instead of a single one if you're on Windows.)
I got this problem with this setting on a Windows PC:
dataDir=c:/data/zoo/
and thus this error:
2016-12-02 15:29:25,327 [myid:] - ERROR [main:ZooKeeperServerMain#64] - Unexpected exception, exiting abnormally
java.io.IOException: Unable to create data directory ??:\data\zoo\version-2
The problem was solved by changing it to (I have ZooKeeper unpacked on the C: drive):
dataDir=/data/zoo/
Also run the command-line tool as Administrator if needed.
I faced the same issue, and this worked:
sudo bin/zookeeper-server-start.sh config/zookeeper.properties
You probably don't have permission to write to the data directory, dataDir (see zookeeper.properties). Change it to a different directory, change the permissions of the current directory, or run Kafka as a different user. You can use the command ls -l /var/zookeeper to see the current permissions and then chmod to change them.
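Concretely, something along these lines may be all that's needed (assuming the dataDir is /var/zookeeper, as in the error above, and you want to run it as your own user):
ls -ld /var/zookeeper                    # check who owns the directory and with what mode
sudo mkdir -p /var/zookeeper             # create it if it does not exist yet
sudo chown -R $(whoami) /var/zookeeper   # or chmod it so your user can write there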
The reason is that Zookeeper has no permission. Try using the administrator role to install it.
For a Windows machine:
Solved: use double backslashes when defining the dataDir path
dataDir=E:\\tools\\zookeeperdata\\data
On my Windows 10 system, using Zookeeper 3.4.10, the dataDir attribute should be set like d:\\zookeeper\\data, not d:\zookeeper\data. It can also be written with the Linux file-system separator (d:/zookeeper/data); then this problem goes away. On Linux, I think it is a permission problem. It can also occur when dataDir is under drive C: on a Windows system.
If you're running Zookeeper on a Windows 10 machine, you need to specify the dataDir property something like this:
"dataDir=C:\zookeeper-3.4.13\data"
On my Windows 10 system, using Zookeeper 3.4.13, the following example path works:
"dataDir=C:\\dev\\tools\\zookeeper-3.4.13\\data"
You have to use double backslashes.
In zoo.cfg you need to change the directory to the above or something similar:
dataDir=C:/zookeeper-3.4.14/zookeeper-3.4.14/data
For Windows, set dataDir to a full path where you have no access restrictions, with no quotes (""):
dataDir=C:\\your-path\
dataDir=C:\\zk\tmp\
Note: I have observed the command fail for some paths (even with full access); running the command prompt as administrator solved it.
For Windows, the below also works:
dataDir=C:\\zookeeper-3.4.14\\zookeeper-3.4.14\\data

Windows could not start mongodb service on local computer. For more info., review the System Event Log

I am using a Windows 32-bit machine and tried to start the MongoDB service from Windows > Services as shown below.
However, I am unable to start the MongoDB service from there, and it throws the following error.
When I try using the command prompt, I get the following error:
Network Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively refused it.
Error: Couldn't connect to server 127.0.0.1:27017 <127.0.0.1>, connection attempt failed.
I had the same issue.
Try removing the mongod.lock file from your MongoDB data directory.
For example, mine is "C:\Program Files\MongoDB\Data\mongod.lock"; after deleting the file, start the MongoDB service and it works like a charm.
In case someone else is running into this problem: just read your log files and you will be able to find the cause. For me, after trying to install it inside the wamp directory, running the MongoDB service gave me the same error message. I went to the logs and found out that I was missing a directory inside my data directory called db; once I had created this directory, the service ran perfectly.
MongoDB uses a default folder to store its files. On Windows, the default location is C:\data\db.
Maybe that folder doesn't exist. In that case, just create it, or change the default location of the Mongo service using the --dbpath command-line flag.
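For example, from a command prompt (a minimal sketch using the default path above; any other directory works the same way with --dbpath):
mkdir C:\data\db
mongod --dbpath "C:\data\db"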
So I just had the same problem, running on Windows 10. The reason MongoDB didn't start was that the path to the data and logs was not correctly set. This has already been pointed out, but my solution is different. Look in C:\Program Files\MongoDB\Server\4.0\bin (or wherever your MongoDB is installed). There is a config file called mongod.cfg. Check that
storage:
  dbPath:
and
systemLog:
  path:
Is set to what you want. In my case, it was using environment variables %MONGODBPATH% or similar that was not set by Windows. By default, the log and data should point to C:\Program Files\MongoDB\Server\4.0\data and C:\Program Files\MongoDB\Server\4.0\log\mongod.log respectively.
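For reference, a minimal sketch of those two sections with the default paths mentioned above filled in (adjust to wherever your install actually lives):
storage:
  dbPath: C:\Program Files\MongoDB\Server\4.0\data
systemLog:
  destination: file
  path: C:\Program Files\MongoDB\Server\4.0\log\mongod.log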
There was an npm: entry on the last line of the MongoDB configuration file, which is located in the installation folder at bin\mongod.cfg.
I commented out that line and started the service, and it is working like a charm.
I found this by running the MongoDB service command from the Windows command line (cmd) and getting an error.
I ran this to spot the error:
"C:\Program Files\MongoDB\Server\4.2\bin\mongod.exe" --config "C:\Program Files\MongoDB\Server\4.2\bin\mongod.cfg" --service
Deleting mongod.lock did not help me, and repair did not help either. In my case it was because one of the databases happened to be corrupted. I moved all dbs to another directory and then copied them back one by one, restarting the MongoDB service each time, to figure out which db file was corrupted. It's definitely a MongoDB bug.
I had the same error message. Try to locate the mongodb log files and look at the last entries. My issue was clearly stated there, a missing directory :
2019-01-29T16:59:44.424+0100 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory C:\wamp64\bin\mongodb\mongodb-win32-x86_64-2008plus-ssl-3.6.10\data\db not found., terminating
The advice about checking the log was what helped me. In this case:
The MongoDB service could not be started. A service specific error occurred: 100
It turned out I had a problem with some databases created with WiredTiger, while the engine specified in mongod.cfg was mmapv1.
So I basically removed the contents of the folder c:/data/db/ and then used the command net start MongoDB --repair, and it worked. Ufff, it's been 2 days.
I'm here a bit late, very late actually, but maybe this works out for those facing the issue now. The MongoDB configuration file on Windows is under 'C:\Program Files\MongoDB\Server\%YOUR MONGO VERSION%'.
I had changed this file and manipulated the bindIp field, so I was getting the same error. It should be 127.0.0.1 or your machine's IP address, which you can find with the 'ipconfig /all' command in cmd. So I fixed bindIp and the service started with no problems.
I was stuck on the same issue, but got the solution by trial and error: just create a new folder at the path "C:\data\db", then go to your command prompt and type 'mongod'; your database server will start.
For me it was a port problem:
just find and kill the process using port 27017.
For Linux: https://bobcares.com/blog/mongodb-error-code-48/
For Windows: How do I kill the process currently using a port on localhost in Windows?
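On Windows, the search-and-kill part can look like this (12345 is a placeholder for whatever PID netstat reports):
netstat -ano | findstr :27017   # the last column is the PID holding the port
taskkill /PID 12345 /F          # kill that process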
I found out that the Visual C++ Redistributable was missing on my Windows 7 machine. After installing it, MongoDB worked.
For Windows 10 users:
Specify the database location. If you don't have one, create the directory mentioned below and always use it:
open cmd
mkdir C:\users\{username}\data
cd C:\users\{username}\data
mongod --dbpath .
To start the MongoDB server:
open cmd
mongod --dbpath C:\users\{username}\data
To stop MongoDB:
open cmd
mongo
If the server is running, run:
use admin
db.shutdownServer()
quit()
In my case, this happened because I did not stop the MongoDB Docker container. After I stopped the process, the error was gone.
In my case, it was a Docker container with MongoDB running on the same port. After I stopped the container, the service started successfully.

Moving postgresql data cluster

Our postgres data folder was installed on a drive with very limited space. I'm now trying to move it over to a newly mounted drive (more space). I've followed several blog posts and they all say...
stop service
copy data cluster
update postgresql-9.1 file (PGDATA=)
restart service
The service starts but when I go to connect, it gives me "could not connect to server: Connection refused"
I tried telnetting to port 5432 and got nothing.
Here is the link to what I've been trying:
http://www-01.ibm.com/support/docview.wss?uid=swg21324272
Thanks, everyone, for your help. It looks like the problem was with permissions.
Instead of doing
cp -R fromfolder tofolder
I did
cp -a fromfolder tofolder
And that solved it. Thanks all.
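For context: cp -a (archive mode) preserves ownership, permissions and symlinks, whereas a plain cp -R run as root can leave the copied cluster owned by root, and postgres refuses to use a data directory it does not own. If you have already copied with -R, fixing ownership and mode on the new location should achieve the same thing; /mnt/newdrive/pgdata is a placeholder for the new data directory:
sudo chown -R postgres:postgres /mnt/newdrive/pgdata
sudo chmod 700 /mnt/newdrive/pgdata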