I have a Golang web server that I have written to handle large file uploads of 30 GB or more. In a proof of concept using Dropzone.js, I can upload files of any size with no issue as long as they are chunked.
The way Dropzone.js implements this is that each chunk has items added to the request headers, like:
dzchunkindex: 435
dzchunksize: 10000
dztotalchunkcount: 3498274
So I receive a chunk, I create the file (if needed), write the data, and check to see if I'm on the last chunk. Then repeat as needed. Once I see I've written the last chunk I close the file.
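A minimal sketch of that server-side check in Go, assuming the Dropzone fields arrive as request headers as described above, the chunk bytes arrive as the raw request body, and chunks arrive in order (the route and file name are placeholders, not your actual code):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"strconv"
)

// handleChunk appends each incoming chunk to a single target file and treats
// a zero-based dzchunkindex equal to dztotalchunkcount-1 as the last chunk.
func handleChunk(w http.ResponseWriter, r *http.Request) {
	index, err := strconv.Atoi(r.Header.Get("dzchunkindex"))
	if err != nil {
		http.Error(w, "missing or invalid dzchunkindex", http.StatusBadRequest)
		return
	}
	total, err := strconv.Atoi(r.Header.Get("dztotalchunkcount"))
	if err != nil {
		http.Error(w, "missing or invalid dztotalchunkcount", http.StatusBadRequest)
		return
	}

	// Create the file on the first chunk, append on the following ones.
	f, err := os.OpenFile("upload.bin", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()

	// Write this chunk's body to the file.
	if _, err := io.Copy(f, r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Indices are zero-based, so the final chunk is total-1.
	if index == total-1 {
		log.Printf("last chunk %d of %d received; finalize the upload here", index+1, total)
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/upload", handleChunk)
	log.Fatal(http.ListenAndServe(":8080", nil))
}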
It seems like Alamofire supports chunked uploads using its AF.upload method.
However, how should my server know when the last chunk has been uploaded? I can certainly check this a different way; I'm just curious what that way should be. I've combed over the Alamofire docs and can't find much.
I can chunk the file manually and upload it, but I'd rather use Alamofire if possible.
Thanks,
Ed
Related: my question on GitHub,
https://github.com/googleapis/python-speech/issues/52
has been active for 9 days, and the only two people who have attempted an answer have both failed. I now think it might be answerable by someone who understands how Google Cloud Storage buckets work, even if they do not understand how Google's Speech API works. To convert long audio files to text, they must first be uploaded to the cloud. I was using some syntax that now appears to be broken, and the following syntax might work, except that Google does not explain how to use this code with files uploaded to the cloud. So, in the code below, published here:
https://cloud.google.com/speech-to-text/docs/async-recognize#speech_transcribe_async-python
The content object has to be located on the cloud and it needs to be a bytes object. Suppose the address of the object is: gs://audio_files/cool_audio
What syntax would I use such that the content object refers to a bytes object?
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

client = speech.SpeechClient()

# 'content' must be the audio data as a bytes object
audio = types.RecognitionAudio(content=content)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-US')

operation = client.long_running_recognize(config, audio)

print('Waiting for operation to complete...')
response = operation.result(timeout=90)
My previous answer didn't really address your question. Let me try again:
Please try this:
audio = types.RecognitionAudio(content=bytes(content, 'utf-8'))
GCS stores objects as a sequence of bytes. If your object has a Content-Encoding header, the content may be transformed while downloading (e.g., gzip content will be uncompressed if the client doesn't supply an Accept-Encoding: gzip header); and if it has a Content-Type header, the client application or library may treat the information differently.
I want to copy a file from the client to the server through a REST endpoint exposed by the server. I have referred to various questions and answers on Stack Overflow, but I could not get a clear picture of it.
I just want sample client and server code in Go that copies a file from the client and saves it on the server.
Thanks in advance.
Direction: Server to Client
So -- both sides are in Go? Okay, let's start with the server side. See my WebLoad.go file from my CSVStorageServer server: (Link to Github)
At line 17, I define the handler for the web server. This method builds a zip file on demand and sends it to the browser. The important part regarding your question is lines 77 to 82. Here, I set the headers for the client, e.g. content length and type. Line 82 sends the whole data to the client side: it copies the bytes from the on-demand zip file to the wire.
On the client side, you trigger e.g. a GET request and store the result. Here is an example: https://golang.org/pkg/net/http/#example_Get
With http.Get(...) you trigger the GET request. With ioutil.ReadAll(res.Body) you read all bytes from the server and store them in a variable. Afterwards, you could write the bytes to disk or process them in memory.
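A minimal client-side sketch of that download in Go; the URL and output file name are placeholders. It streams the body with io.Copy instead of ioutil.ReadAll, which avoids holding a large file entirely in memory:

package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trigger the GET request against the server's endpoint (placeholder URL).
	res, err := http.Get("http://localhost:8080/load")
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	// Create the destination file on disk (placeholder name).
	out, err := os.Create("download.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Stream the response body to the file instead of buffering it all in memory.
	if _, err := io.Copy(out, res.Body); err != nil {
		log.Fatal(err)
	}
}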
I hope this answer helps you.
Best regards, Thorsten
Edit #1:
Regarding the REST endpoint, cf. the server definition (link to Github). Line 16 defines the REST endpoint for this handler. In this case, it becomes available as /load. You could use any REST-like path here, e.g. /open/file/USERID/send, etc.
Direction: Client to Server
In order to copy a file from the client to the server side, similar operations are necessary. On the client side, a POST request with multipart/form-data is necessary. Here is a good example for this: Link to a blog post. This example also covers the server part. The relevant client part is the function func postFile(filename string, targetUrl string) error { ... }.
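A minimal sketch of such a client-side multipart upload in Go; the URL, form field name, and file name are placeholders and are not taken from the linked post:

package main

import (
	"bytes"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
)

// postFile uploads the given file as multipart/form-data to targetURL.
func postFile(filename string, targetURL string) error {
	body := &bytes.Buffer{}
	writer := multipart.NewWriter(body)

	// Create the form field "file"; the server reads it via request.FormFile("file").
	part, err := writer.CreateFormFile("file", filename)
	if err != nil {
		return err
	}

	f, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer f.Close()

	// Copy the file contents into the multipart body.
	if _, err := io.Copy(part, f); err != nil {
		return err
	}

	// Close the writer to write the multipart trailer before sending.
	if err := writer.Close(); err != nil {
		return err
	}

	_, err = http.Post(targetURL, writer.FormDataContentType(), body)
	return err
}

func main() {
	if err := postFile("example.txt", "http://localhost:8080/upload"); err != nil {
		log.Fatal(err)
	}
}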
For the server part, here is an example of my own: Link to Github. This example receives a file from the client and writes it to a MongoDB database. The relevant parts are (a simplified sketch follows below):
Line 39 reads the file from the client: file, fileHeader, fileError := request.FormFile("file"). The result is a handle to the uploaded file.
Line 60 copies all bytes from the source (browser or Go client) into a destination (here, MongoDB): _, errCopy := io.Copy(newFile, file).
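And a simplified server-side sketch of the same pattern, writing to a local file instead of MongoDB (route and port are placeholders):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Read the uploaded file from the multipart form field "file".
	file, fileHeader, err := r.FormFile("file")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer file.Close()

	// Create the destination; here a local file instead of a MongoDB document.
	newFile, err := os.Create(filepath.Base(fileHeader.Filename))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer newFile.Close()

	// Copy all bytes from the client into the destination.
	if _, err := io.Copy(newFile, file); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}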
Edit #2:
Here is a full working example: https://github.com/SommerEngineering/Example010 where client and server are in the same program. It should be easy to split it into two programs.
OK, here's a goal I've been chasing for a while.
As is well known, most advertising and analytics companies use so-called "pixel" code to track website views, transactions, conversions, etc.
I have a general idea of how it works; the problem is how to implement it. The tracking code consists of a few parts.
The tracking code itself.
This is the code that the user inserts on their webpage in the <head> section. The main goal of this code is to set some customer-specific variables and to call the *.js file.
*.js file.
This file holds all the magic of creating/reading/updating/deleting (CRUD) cookies and tracking the user's events and interactions with the webpage.
The pixel code.
This is an <img> tag with the src attribute pointing to an image file, e.g. a *.gif, that takes all the parameters collected on the page and stores them in the database.
Example:
WordPress pixel code: <img id="wpstats" src="http://stats.wordpress.com/g.gif?host=www.hostname.com&list_of_cookies_value_pairs;" alt="">
Google Analytics:
http://www.google-analytics.com/__utm.gif?utmwv=4&utmn=769876874&etc
Now, it's obvious that the *.gif request has to reach a server-side script in order to read the parameter data and store it in a database.
Does anyone have an idea how to implement this in Zend?
UPDATE
Another thing I'm interested in: how do I prevent the user's browser from loading a cached *.gif? Will a random parameter value do the trick? Example: src="pixel.gif?nocache=random_number", where the nocache parameter value is different on every request.
As Zend is built using PHP, it might be worth reading the following question and answer: Developing a tracking pixel.
In addition to that answer, and as you're looking for a way of preventing the tracking image from being cached, the easiest way of doing this is to append a unique/random string to its URL, generated at runtime.
For example, on the server side, as each image tag is created, you might add a random URL parameter:
<?php
// Tracking parameters collected elsewhere on the page (placeholder values)
$vara = 'value_a';
$varb = 'value_b';
// Generate a random id so the image URL differs on every request
$rand_id = mt_rand();
// Echo the image tag and append the random id as a cache buster
echo "<img src='pixel.php?a=" . $vara . "&b=" . $varb . "&rand=" . $rand_id . "'>";
?>
Just adding my 2 cents to this thread, because I think an important and frequently used option is missing: you don't necessarily need a scripting language to capture the request. A more efficient approach is to use the web server access log (the Apache access log, for instance) to record the request and then process that log with whatever tools you see fit, such as the ELK stack.
This makes serving the requests much lighter, because no scripting language is invoked to prepare the response, just a native Apache response, which is typically much more efficient.
First of all, the *.gif doesn't need to be that file type; the only thing that matters is the Content-Type HTTP header. Set that to image/gif (or any other appropriate type) at the beginning, execute your code, and render some sort of image to the response body.
All of the code above is correct and good, but to be concrete about the "g.gif" mentioned earlier: you can add a simple PHP script that writes to a SQL database or appends to a file (e.g. with fwrite($handle, $opened)), where the variable $opened serves as a counter that is incremented when someone opens your mail. Then save that script as "g.gif".
To make all of this work, just add the following:
<Files "g.gif">
AddType application/x-httpd-php .gif
</Files>
to your ".htaccess" file, but be sure to make a new directory for that g.gif (or whatever.gif), where the directory contains only g.gif and the .htaccess file.
I'm out in the woods with this one: I have a universal, navigation-based app that displays data currently stored in a plist file. In a future release, I want to migrate the database to a JSON file on my server, which the app can download to its bundle and then parse. Can anyone suggest a simple, lightweight way of checking that the currently stored file in the bundle matches the version hosted on the server? Essentially, checking for updates to the database without re-downloading the entire JSON file.
Here's a snippet of what the beginning of the JSON file currently looks like.
{
    "version" : "0.2",
    "description" : "1. Corrections to several entries.\n2. Added 21 new departments from Alameda & Fresno Counties.",
    "counties" : { ... rest of the JSON file here ... }
}
My idea was to store the "version" value ("0.2") in NSUserDefaults and check that value against the JSON file available online every time the app launches.
Am I on the right track or is there a better way of doing this altogether?
Thank you
Romeo
You can add the If-Modified-Since header to an instance of NSMutableURLRequest. If the document on the server has changed since that date, you'll get the data back. If it hasn't changed, you get a 304 Not Modified and no data.
This is much better than making a HEAD request because in the event of an updated file, you're only making one request instead of two.
Do a HEAD request (instead of a GET) and check the Last-Modified header. If the file has been modified since the last time you checked, download the file. Save the modified time somewhere to compare against next time.
You can set the http method on the request object like so:
[request setHTTPMethod:@"HEAD"];
I want to download a large file (> 500 MB) to my application from the server. I used NSURLConnection, which works well when the network is good, but when the network is poor, a 500 MB download sometimes stops after only 100 MB or 200 MB, and connectionDidFinishLoading is still called even though the download is incomplete. Someone suggested setting a timeout to avoid this situation, but setting the timeout to 30 s did not work. Should I set 60 s or more? Does anyone have a better idea? Please help me.
In the connectionDidFinishLoading method, compare the expected length of the download with the length of the data actually downloaded.
The expected length is obtained from [response expectedContentLength] in the didReceiveResponse method.
You should download such a big file in parts. Specify the Range field in the header of your HTTP request and ask for only a small portion of the file at a time. When you have received all the portions, you can assemble the file.
You can set the HTTP header like this: [request setValue:@"bytes=0-1023" forHTTPHeaderField:@"Range"];
This example requests only the first kilobyte of the file; the server's 206 Partial Content response includes a Content-Range header describing which bytes were returned. See also the Range and Content-Range headers in http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html