Solution (in a way): I just added a time.sleep(); half of the file still got corrupted, but at least now I transferred all 1.19 MB.
Removing base64 completely seems to be the answer :D
I am trying to create a script where a user can transfer a WAV file.
This is my current code, both client side and server side. The problem is that my data is corrupted at the end.
Also, when I see that all the data has been transferred, I close the server.
P.S. I know that file is a bad variable name, but I'm only using it during these tests.
Thanks in advance.
Client Side:
import base64
import socket
# QtCore comes from whichever Qt binding is in use (PyQt4/PyQt5/PySide)

class SockThread(QtCore.QThread):
    def create_r(self, filename):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.filename = filename

    def run(self):
        self.sock.connect(("127.0.0.1", 8000))
        wf = open(self.filename, 'rb')
        print("OK")
        for line in wf:
            self.sock.send(base64.b64encode(line))
        wf.close()
        print("OK")
Server Side:
file = open("sample.wav", "wb")
self.connection = True
while self.connection is True:
    try:
        data = self.socket.recv(1024)
        print(data)
        file.write(base64.b64decode(data))
    except:
        break
I think you need to read the entire stream from the socket and then b64decode it once; decoding it in chunks of 1024 bytes may not produce the same result, because the chunk boundaries don't line up with the boundaries of the separately encoded pieces.
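Given the asker's own follow-up (dropping base64 entirely), a minimal sketch of that approach: send the raw bytes with sendall() and, on the receiving side, read until the peer closes the connection. This is a self-contained loopback demo with a stand-in payload and an OS-assigned port, not the asker's Qt thread:

```python
import socket
import threading

# Stand-in payload (the question's 1.19 MB WAV is not available here).
payload = bytes(range(256)) * 4096   # 1 MiB of binary data

received = bytearray()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    while True:
        chunk = conn.recv(4096)
        if not chunk:                # empty recv() means the sender closed
            break
        received.extend(chunk)
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(payload)                 # sendall() retries partial sends
cli.close()                          # closing signals end-of-stream
t.join()
srv.close()

assert bytes(received) == payload
```

Because the receiver loops until recv() returns empty, no byte count or framing is needed for a single-file transfer; the connection close is the end-of-file marker.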
I started using Sinatra. Right now I'm using the following code to handle file downloads. It works great for small files, but with large files (> 500 MB) the connection disconnects in the middle.
dpath = "/some root path to file"

get '/getfile/:path' do |path|
  s = path.to_s
  s.gsub!("-*-", "/")
  fn = s.split("/").last
  s = dpath + "/" + s
  send_file s, :filename => fn
end
Two things:
What does your validate method do? If it's trying to load the file into memory, you might be running out of RAM on your server (especially with large files).
Where are you setting fn? It's a local variable inside the get block, and there's nothing setting it in your code example.
I'm trying to download a file and then save its contents to an XML file. The code I've got is:
local ltn12 = require("ltn12")
local http = require("socket.http")

local filePath = currentDir().."/file.xml"
local xFile = io.open(filePath, "w")
local save = ltn12.sink.file(xFile)
http.request{addr, sink = save }
print("Done!")
It runs but the file is still empty. Can I get some help here please?
It's a syntactic mistake: you mixed two styles of calling http.request. Use
http.request{url = addr, sink = save }
I wrote the file transferring code as follows:
val fileContent: Enumerator[Array[Byte]] = Enumerator.fromFile(file)
val size = file.length.toString
file.delete // (1) THE FILE IS TEMPORARY SO SHOULD BE DELETED
SimpleResult(
  header = ResponseHeader(200, Map(CONTENT_LENGTH -> size, CONTENT_TYPE -> "application/pdf")),
  body = fileContent)
This code works successfully, even if the file is rather large (2.6 MB), but I'm confused, because my understanding is that .fromFile() is a wrapper around fromCallback() and that SimpleResult actually reads the file buffered, yet the file is deleted before that happens.
My easy assumption is that java.io.File.delete waits until the file is released after the chunked reading completes, but I have never heard of such behavior in Java's File class. Or perhaps .fromFile() has already loaded the whole file into the Enumerator instance, but that would go against the fromCallback() spec, I think.
Does anybody know about this mechanism?
I'm guessing you are on some kind of Unix system, OS X or Linux for example.
On a Unix-like system you can actually delete a file that is open: any filesystem entry is just a link to the actual file, and so is the file handle you get when you open a file. The file contents won't become unreachable/deleted until the last link to it is removed.
So: it will no longer show up in the filesystem after you do file.delete, but you can still read it using the InputStream that was created in Enumerator.fromFile(file), since that created a file handle. (On Linux you can actually find it through the special /proc filesystem which, among other things, contains the file handles of each running process.)
On Windows I think you will get an error though, so if this is to run on multiple platforms you should probably test your webapp on Windows as well.
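The same unlink-while-open behavior can be demonstrated outside the JVM; here is a small Python sketch (for illustration only, and it assumes a Unix-like system, per the answer above):

```python
import os
import tempfile

# Write a real file, then show that an open handle keeps the contents
# readable after the directory entry is removed -- which is what lets
# the Enumerator keep streaming after file.delete.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"still readable")

handle = open(path, "rb")    # a second handle to the same inode
os.remove(path)              # unlink: the name disappears from the filesystem
assert not os.path.exists(path)
data = handle.read()         # the open handle still sees the contents
handle.close()               # last reference dropped; blocks are now freed
assert data == b"still readable"
```

On Windows the os.remove() call would typically fail with a PermissionError while the second handle is open, which matches the caveat above.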
I'm writing code that will transfer files between two computers, using a TCP socket for the connection. The thing is, I need to attach a sort of header to the file bytes I'm sending, so the receiver knows that what I'm sending is part of a file. Let's say my header is data. The string I'll send will be: data <file bytes>.
I'm able to send them and the receiver is able to receive them, but the file ends up corrupted. It works well for plain text files, but for other files it doesn't seem to parse the data correctly.
while (1) {
    fp = (char *) malloc(56);
    rc = recv(connfd, fp, 55, 0);
    if (strcmp(fp, "stop") == 0) {
        break;
    }
    fp = fp + 5; /* I do this to skip the "data " header */
    wr = write(fd, fp, rc - 5);
    tot = tot + wr;
    printf("Received a total of %d bytes, rc = %d\n", tot, rc);
}
But I've tried sending the file without the header and it arrives uncorrupted; however, I need to use those 'data' headers for this particular code. What am I doing wrong?
fp = fp + 5; /* I do this to skip the "data " header */
But you don't receive the data<space> header in every recv() call. You have to keep a buffer to which you append all the data you receive, until you encounter another "data<space>".
Please note, though, that separators are generally a bad idea. What if you send a file that contains the string "data<space>" in it? Your client will assume that a new file follows, while in fact you're still receiving the original file.
Try sending some kind of message-length header instead, for example a uint32 occupying four bytes before each file you send. You can then read the first four bytes, and from them you know how many more bytes to expect for that file.
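A sketch of that length-prefix framing (Python for brevity, with an in-memory stream standing in for the socket; the function names are illustrative):

```python
import io
import struct

def send_message(stream, data):
    # 4-byte big-endian length header, then the payload itself.
    stream.write(struct.pack(">I", len(data)))
    stream.write(data)

def recv_exact(stream, n):
    # A read on a socket/stream may return fewer bytes than requested,
    # so loop until exactly n bytes have been collected.
    buf = b""
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(stream):
    (length,) = struct.unpack(">I", recv_exact(stream, 4))
    return recv_exact(stream, length)

# The payload can safely contain the old "data " separator.
wire = io.BytesIO()
payload = b"binary that contains data data data \x00\xff"
send_message(wire, payload)
wire.seek(0)
assert recv_message(wire) == payload
```

Because the receiver knows the exact byte count up front, no in-band separator is needed, and arbitrary binary content cannot be mistaken for a message boundary.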
I'm having a problem with serial I/O under both Windows and Linux using pySerial. With this code, the device never receives the command and the read times out:
import serial
ser = serial.Serial('/dev/ttyUSB0',9600,timeout=5)
ser.write("get")
ser.flush()
print ser.read()
This code times out the first time through, but subsequent iterations succeed:
import serial
ser = serial.Serial('/dev/ttyUSB0',9600,timeout=5)
while True:
ser.write("get")
ser.flush()
print ser.read()
Can anyone tell me what's going on? I tried to add a call to sync(), but it wouldn't take a serial object as its argument.
Thanks,
Robert
Put some delay in between the write and the read, e.g.:
import serial
from time import sleep

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=5)
ser.flushInput()
ser.flushOutput()
ser.write("get")
# 100 ms delay
sleep(.1)
print ser.read()
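Rather than a fixed sleep, one option (assuming pySerial 3.x, where ports expose the in_waiting property, as in the unittest answer below) is to poll until data actually arrives, with an overall timeout; a sketch:

```python
import time

def read_response(ser, min_bytes=1, timeout=2.0, poll=0.05):
    """Poll the port instead of sleeping a fixed amount: return as soon
    as at least min_bytes are waiting, or b"" if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ser.in_waiting >= min_bytes:
            return ser.read(ser.in_waiting)
        time.sleep(poll)
    return b""
```

This returns as soon as the device answers instead of always paying the worst-case delay, and degrades gracefully to an empty read if the device never responds.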
The question is really old, but I feel this might be a relevant addition.
Some devices (such as the Agilent E3631, for example) rely on DTR. Some ultra-cheap adapters do not have a DTR line (or do not have it broken out), and with those, such devices may never behave in the expected manner (delays between reads and writes get ridiculously long).
If you find yourself wrestling with such a device, my recommendation is to get an adapter with DTR.
This is because pySerial returns from opening the port before it is actually ready. I've noticed that calls like flushInput() don't actually clear the input buffer, for example, if called immediately after open(). The following code demonstrates this:
import unittest
import serial
import time

"""
1) create a virtual or real connection between COM12 and COM13
2) in a terminal connected to COM12 (at 9600, N81), enter some junk text (e.g. 'sdgfdsgasdg')
3) then execute this unit test
"""

class Test_test1(unittest.TestCase):
    def test_A(self):
        with serial.Serial(port='COM13', baudrate=9600) as s:  # open serial port
            print("Read ASAP: {}".format(s.read(s.in_waiting)))
            time.sleep(0.1)  # wait 100 ms for the pySerial port to actually be ready
            print("Read after delay: {}".format(s.read(s.in_waiting)))

if __name__ == '__main__':
    unittest.main()
"""
output will be:
Read ASAP: b''
Read after delay: b'sdgfdsgasdg'
.
----------------------------------------------------------------------
Ran 1 test in 0.101s
"""
My workaround has been to implement a 100ms delay after opening before doing anything.
Sorry that this is old and obvious to some, but I didn't see this option mentioned here. I ended up calling read_all() when flush wasn't doing anything with my hardware.
# Stopped reading for a while on the connection, so things build up
# Neither of these were working
conn.flush()
conn.flushInput()
# This did the trick; the return value is ignored
conn.read_all()
# Waits for the next line
conn.readline()