Losing file properties on resumable upload to Google Cloud Storage - google-cloud-storage

I'm running into trouble when using Google Cloud Storage's resumable upload for music and video files; namely, certain properties are lost when a file is uploaded and then downloaded back from the bucket.
Details: (original file on the left, downloaded file on the right) [screenshot omitted]
General: (original file on the left, downloaded file on the right) [screenshot omitted]
This isn't necessarily a problem for audio, but it is for video, as the browser will no longer play it back in-browser.
The process for uploading is much the same as in this question.
A small code sample that initiates the resumable upload:
func StoreUpload(c appengine.Context, cn context.Context, contentType string, filename string, email string, origin string) (string, string, error) {
    uuid, err := UUID()
    if err != nil {
        return "", "", err
    }
    // Keep the original extension (the last four characters, e.g. ".mp4") on the generated name.
    filename = uuid + filename[len(filename)-4:]
    tokenSource := google.AppEngineTokenSource(cn, storage.ScopeFullControl)
    token, err := tokenSource.Token()
    if err != nil {
        return "", "", err
    }
    metaBody := []byte("{ \"metadata\": { \"x-goog-meta-uploader\": \"" + email + "\" }}")
    req, err := http.NewRequest(
        "POST",
        fmt.Sprintf("https://www.googleapis.com/upload/storage/v1/b/%s/o?uploadType=resumable&name=upload/%s", models.HYLIGHT_EXTERNAL_BUCKET, filename),
        bytes.NewReader(metaBody),
    )
    if err != nil {
        return "", "", err
    }
    req.Header.Set("Authorization", "Bearer "+token.AccessToken)
    req.Header.Set("X-Upload-Content-Type", contentType)
    req.Header.Set("Content-Type", "application/json; charset=UTF-8")
    req.Header.Set("Content-Length", fmt.Sprint(len(metaBody)))
    req.Header.Set("Origin", origin)
    client := &http.Client{
        Transport: &urlfetch.Transport{
            Context:  c,
            Deadline: 20 * time.Second,
        },
    }
    res, err := client.Do(req)
    if err != nil {
        return "", "", err
    }
    // The Location header carries the resumable session URI the client uploads the bytes to.
    return res.Header.Get("Location"), filename, err
}
The resulting object in the Google Cloud Storage bucket has the correct MIME type of video/mp4 (as determined by the browser), but it still can't be viewed in the browser.
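For context, the actual file bytes are then sent to the session URI returned in the Location header (in my setup the browser does this). The equivalent step in Go would look roughly like this; a minimal sketch with a hypothetical helper name, assuming sessionURI is the Location value returned by StoreUpload, and covering only the single-request case without Content-Range resume handling:
// Sketch: send the file bytes to the resumable session URI in one PUT.
func PutToSession(sessionURI, contentType string, data []byte) error {
    req, err := http.NewRequest("PUT", sessionURI, bytes.NewReader(data))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", contentType)
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
        return fmt.Errorf("upload failed: %s", res.Status)
    }
    return nil
}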
EDIT:
I've also tried using the Chrome extension 'Postman' to upload a file after receiving a resumable upload link, but its properties are likewise lost when uploading to GCS, so the problem doesn't seem to be related to the JS side involved in uploading a file to GCS.
If I directly upload a folder containing the video file using the 'upload folder' button in the Google Developers Console, the file's properties are retained.

It turns out that the file is being corrupted when submitted to Google Cloud Storage via an HTML "input" form from the browser. However, if the same URL is used to post the file via JavaScript, the file is not corrupted, which is very strange.
I have the GCS team looking into the issue to see if there is a fix.
The full details, along with a workaround, are here:
https://code.google.com/p/googleappengine/issues/detail?id=12268
Thank you for working with me to get to the bottom of this!

Related

Trying to access IBM Cloud regions outside of the US but failing due to unauthorized errors

I am trying to use the Bluemix Go client library to connect to each region in the world and get our Bluemix instance ID so I can issue subsequent resource queries.
However, anything but us-south returns an unauthorized error. I am trying not to use the Cloud Foundry API, so what is the best practice for enabling my organization to issue resource requests in the different regions? The IAM pages do not appear to have a region association the way the Cloud Foundry API does.
This cobra command structure appears to solve the problem.
package cmd

import (
    "log"
    "os"

    "github.com/IBM-Cloud/bluemix-go"
    "github.com/IBM-Cloud/bluemix-go/authentication"
    "github.com/IBM-Cloud/bluemix-go/rest"
    "github.com/IBM-Cloud/bluemix-go/session"
    "github.com/davecgh/go-spew/spew"
    "github.com/spf13/cobra"
)

var (
    debugFlag bool
    region    string
)

// tokenCmd represents the token command
var tokenCmd = &cobra.Command{
    Use:   "token",
    Short: "Get an IBM Cloud IAM bearer token",
    Long:  `Generate an IAM token.`,
    Run:   tokenCmdExecute,
}

func init() {
    rootCmd.AddCommand(tokenCmd)
    // Cobra supports persistent flags, which work for this command
    // and all subcommands.
    tokenCmd.PersistentFlags().BoolVarP(&debugFlag, "debug", "d", false, "Enable debug")
    tokenCmd.PersistentFlags().StringVarP(&region, "region", "r", "us-south", "IBM Cloud region to target")
}

func tokenCmdExecute(cmd *cobra.Command, args []string) {
    // Get the API key from the environment.
    apiKey := os.Getenv("BLUEMIX_API_KEY")
    if apiKey == "" {
        log.Fatal("No API key found: please specify a BLUEMIX_API_KEY environment variable containing the API key for your IBM Cloud account")
    }
    // Create a session object that will generate an IAM token and cache it.
    cfg := &bluemix.Config{
        BluemixAPIKey: apiKey,
        Region:        region,
        Debug:         debugFlag,
    }
    sess, err := session.New(cfg)
    if err != nil {
        log.Fatal(err)
    }
    iamClient, err := authentication.NewIAMAuthRepository(cfg, rest.NewClient())
    if err != nil {
        log.Fatal(err)
    }
    err = iamClient.AuthenticateAPIKey(sess.Config.BluemixAPIKey)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(sess.Config.IAMAccessToken)
    spew.Dump(sess.Config)
}
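Note that the snippet assumes cobra's usual generated scaffolding, in particular a rootCmd in the same package. A minimal sketch of that wiring, with hypothetical names:
// cmd/root.go (sketch)
package cmd

import (
    "os"

    "github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
    Use:   "ibmcloud-tool", // hypothetical binary name
    Short: "Small CLI for IBM Cloud experiments",
}

// Execute runs the root command; main() just calls cmd.Execute().
func Execute() {
    if err := rootCmd.Execute(); err != nil {
        os.Exit(1)
    }
}
With that in place, running the token subcommand with a region flag (for example, token --region eu-gb) should print a token for the targeted region.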
The region must be one of a fixed set; the SDK does a lookup to find the appropriate endpoints, and the set it supports is the set of regions in which IBM Cloud exists.
The problem was that in my first attempt I was using an mccp client, which is a multi-cloud Cloud Foundry API that uses a different auth and endpoint-caching protocol.

GoLang: Send Mailjet email without Mailjet library

I am trying to send emails from my Go application using my Mailjet credentials, but I am trying to do it the standard Go way (yes, I know that their library is highly encouraged).
I have the emails working fine using the Mailjet library, but my boss made a really good point that we might not stay with Mailjet forever. If we switch to a different email solution, we don't want to have to rewrite all of our email code; we just want to change our hostname and credentials.
My printer sends emails just fine through Mailjet using the same hostname and credentials, but for some reason my Go app won't!
My code was adapted from the SendMail example in the Go net/smtp package.
Here it is (without my credentials, of course):
import (
    "fmt"
    "net/smtp"
)

func SendTestEmail() (bool, error) {
    fmt.Println("Send Test Email: Enter")
    success := false
    hostname := "in-v3.mailjet.com"
    // username and password are defined elsewhere; real credentials omitted.
    auth := smtp.PlainAuth("", username, password, hostname)
    to := []string{"me@example.com"}
    msg := []byte("To: me@example.com\r\n" +
        "Subject: discount Gophers!\r\n" +
        "\r\n" +
        "This is the email body.\r\n")
    fmt.Println("Send Test Email: Sending Email")
    err := smtp.SendMail(hostname+":587", auth, "sender@example.com", to, msg)
    if err == nil {
        fmt.Println("Send Test Email: Email successfully sent!")
        success = true
    } else {
        fmt.Println("Send Test Email: Email failed to send", err)
    }
    fmt.Println("Send Test Email: Exit")
    return success, err
}
Note that I am using port 587. I do not know whether my printer is using 587 or 25, but it's working. It doesn't work when using port 25 either.
What is really weird is that smtp.SendMail isn't returning any errors, but I still do not get any emails (yes, I am checking my junk folder)!
Also, when I log into Mailjet, I don't see that any emails were sent. I do see that an email was sent when I send something from the printer.
So, where is my email?!
Any help is greatly appreciated. Thanks!
First of all, thanks for choosing Mailjet as your email service provider! I'm leading the API Product and Developer Relations at Mailjet.
When it comes to sending, you're right to go with SMTP. It's standard, widely supported, and easy to switch away from (even if I hope we won't get there!). Our Go library comes in handy when you need to deal with our API to manage business processes.
I have several questions and some feedback after looking at your code:
I guess the "sender@example.com" from address is not the one you use in your real code? In any case, this address must have been validated on the Mailjet side beforehand. See our dedicated guide.
It seems you set some SMTP headers like Subject within the message, when they should be handled separately.
Here's working code that I use with SMTP:
package main

import (
    "fmt"
    "log"
    "net/smtp"
)

func main() {
    auth := smtp.PlainAuth(
        "",
        "MAILJET_API_KEY",
        "MAILJET_API_SECRET",
        "in-v3.mailjet.com",
    )
    email := "foobar@test.com"
    header := make(map[string]string)
    header["From"] = email
    header["To"] = email
    header["Subject"] = "Hello Mailjet World!"
    header["X-Mailjet-Campaign"] = "test"
    message := ""
    for k, v := range header {
        message += fmt.Sprintf("%s: %s\r\n", k, v)
    }
    message += "\r\nHi! Thanks for using Mailjet."
    err := smtp.SendMail(
        "in-v3.mailjet.com:587",
        auth,
        email,
        []string{email},
        []byte(message),
    )
    if err != nil {
        log.Printf("Error: %s", err)
    } else {
        log.Printf("Mail sent!")
    }
}
Hope it helps! hAPI sending with Mailjet
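A follow-up thought on the "no error but no email" symptom: driving the SMTP session step by step with net/smtp's Client makes each server response visible, so a rejected (unvalidated) sender address or a failed authentication surfaces immediately. A minimal sketch, reusing the same hostname and placeholder credentials and addresses from above:
package main

import (
    "crypto/tls"
    "log"
    "net/smtp"
)

func main() {
    host := "in-v3.mailjet.com"
    c, err := smtp.Dial(host + ":587")
    if err != nil {
        log.Fatal("dial: ", err)
    }
    defer c.Quit()
    // Upgrade to TLS before authenticating, as SendMail does internally on port 587.
    if err := c.StartTLS(&tls.Config{ServerName: host}); err != nil {
        log.Fatal("starttls: ", err)
    }
    auth := smtp.PlainAuth("", "MAILJET_API_KEY", "MAILJET_API_SECRET", host)
    if err := c.Auth(auth); err != nil {
        log.Fatal("auth: ", err) // bad credentials surface here
    }
    if err := c.Mail("sender@example.com"); err != nil {
        log.Fatal("mail from: ", err) // an unvalidated sender surfaces here
    }
    if err := c.Rcpt("me@example.com"); err != nil {
        log.Fatal("rcpt to: ", err)
    }
    w, err := c.Data()
    if err != nil {
        log.Fatal("data: ", err)
    }
    if _, err := w.Write([]byte("Subject: test\r\n\r\nbody\r\n")); err != nil {
        log.Fatal("write: ", err)
    }
    if err := w.Close(); err != nil {
        log.Fatal("close: ", err) // the server's final verdict on the message
    }
}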

Using Sailsjs Skipper file uploading with Flowjs

I'm trying to use Skipper and flow.js (with ng-flow) together for big file uploads.
Based on the Node.js sample located in the flow.js repository, I've created my Sails controller and service to handle file uploads. When I upload a small file it works fine, but if I try to upload a bigger file (e.g. a video of 200 MB) I receive the errors listed below and the array req.file('file')._files is empty. The interesting fact is that this happens only a few times during an upload. For example, if flow.js cuts the file into 150 chunks, these errors appear in the Sails console only 3-5 times. So almost all chunks are uploaded to the server, but a few are lost, and as a result the file is corrupted.
verbose: Unable to expose body parameter `flowChunkNumber` in streaming upload! Client tried to send a text parameter (flowChunkNumber) after one or more files had already been sent. Make sure you always send text params first, then your files.
These errors appear for all flow.js parameters.
I know that text parameters must be sent first for Skipper to work correctly, and in the Chrome network console I've checked that flow.js sends the data in the correct order.
Any suggestions?
Controller method
upload: function (req, res) {
    flow.post(req, function (status, filename, original_filename, identifier) {
        sails.log.debug('Flow: POST', status, original_filename, identifier);
        res.status(status).send();
    });
}
Service post method
$.post = function (req, callback) {
    var fields = req.body;
    var file = req.file($.fileParameterName);
    if (!file || !file._files.length) {
        console.log('no file', req);
        // Drain the request so skipper does not hang, then report an invalid request.
        if (file) file.upload(function () {});
        callback('invalid_flow_request', null, null, null);
        return;
    }
    var stream = file._files[0].stream;
    var chunkNumber = fields.flowChunkNumber;
    var chunkSize = fields.flowChunkSize;
    var totalSize = fields.flowTotalSize;
    var identifier = cleanIdentifier(fields.flowIdentifier);
    var filename = fields.flowFilename;
    if (!stream.byteCount) {
        callback('invalid_flow_request', null, null, null);
        return;
    }
    var original_filename = stream.filename;
    var validation = validateRequest(chunkNumber, chunkSize, totalSize, identifier, filename, stream.byteCount);
    if (validation == 'valid') {
        var chunkFilename = getChunkFilename(chunkNumber, identifier);
        // Save the chunk via the skipper file upload API.
        file.upload({ saveAs: chunkFilename }, function (err, uploadedFiles) {
            // Do we have all the chunks yet?
            var currentTestChunk = 1;
            // flow.js numbers chunks from 1; the last chunk absorbs any remainder.
            var numberOfChunks = Math.max(Math.floor(totalSize / (chunkSize * 1.0)), 1);
            var testChunkExists = function () {
                fs.exists(getChunkFilename(currentTestChunk, identifier), function (exists) {
                    if (exists) {
                        currentTestChunk++;
                        if (currentTestChunk > numberOfChunks) {
                            callback('done', filename, original_filename, identifier);
                        } else {
                            // Recurse until a chunk is missing or all are found.
                            testChunkExists();
                        }
                    } else {
                        callback('partly_done', filename, original_filename, identifier);
                    }
                });
            };
            testChunkExists();
        });
    } else {
        callback(validation, filename, original_filename, identifier);
    }
};
Edit
I found a solution: set the flow.js property maxChunkRetries: 5, because by default it's 0.
On the server side, if req.file('file')._files is empty, I throw a non-permanent (in flow.js terms) error.
So this solves my problem, but the question of why it behaves like this is still open. The sample code for flow.js and Node.js uses connect-multiparty and has no additional error-handling code, so this is most likely a skipper body-parser bug.

Looking for Starter Resources for the Soundcloud API in GoLang

I've unsuccessfully tried to access the SoundCloud API with Go. For any language that isn't directly supported by SoundCloud, their API is very convoluted. If anyone has any resources or code examples, I'd appreciate it if they shared them with me.
My code is as follows:
// Note: auth, c, AuthEndpoint, and the Client type are defined elsewhere in my code.
func main() {
    v := url.Values{}
    v.Set("scope", "non-expiring")
    v.Set("client_id", auth.ClientID)
    v.Set("response_type", "code")
    v.Set("redirect_uri", auth.RedirectURI)
    c.AuthURL = AuthEndpoint + "?" + v.Encode()
    c.Values = v.Encode()
    res := c.Request("POST", url.Values{})
    fmt.Println(string(res))
}

func (c *Client) Request(method string, params url.Values) []byte {
    params.Set("client_id", "*************")
    reqUrl := "https://api.soundcloud.com/oauth2/token"
    req, _ := http.NewRequest(method, reqUrl, strings.NewReader(c.Values))
    req.Header.Add("Accept", "application/json")
    resp, _ := c.client.Do(req)
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    return body
}
I'm actively developing a Go package for accessing and working with the SoundCloud API; it has OAuth2 support and is already usable.
I invite you to take a look: https://github.com/njasm/gosoundcloud
Take into consideration that the package is still under heavy development; the API might change in the future.
You can also have a look at yanatan16/golang-soundcloud, even though the authentication part isn't implemented yet (see its issues).
There is an oauth class, though, and quite a few other calls to the API for getting SoundCloud objects.
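As another option, the token exchange itself can be done with the generic golang.org/x/oauth2 package, without a SoundCloud-specific library. A minimal sketch: the token URL matches the one in the question, while the connect URL and the credential placeholders are assumptions.
package main

import (
    "context"
    "fmt"
    "log"

    "golang.org/x/oauth2"
)

func main() {
    conf := &oauth2.Config{
        ClientID:     "YOUR_CLIENT_ID",     // placeholder
        ClientSecret: "YOUR_CLIENT_SECRET", // placeholder
        RedirectURL:  "YOUR_REDIRECT_URI",  // placeholder
        Scopes:       []string{"non-expiring"},
        Endpoint: oauth2.Endpoint{
            AuthURL:  "https://soundcloud.com/connect",          // assumed connect endpoint
            TokenURL: "https://api.soundcloud.com/oauth2/token", // from the question
        },
    }
    // Step 1: send the user here to authorize and obtain a code.
    fmt.Println("Visit:", conf.AuthCodeURL("state"))
    // Step 2: exchange the code (received on the redirect URI) for a token.
    tok, err := conf.Exchange(context.Background(), "CODE_FROM_REDIRECT")
    if err != nil {
        log.Fatal(err)
    }
    // The returned client attaches the bearer token to every request.
    client := conf.Client(context.Background(), tok)
    _ = client
}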

Meteor: Saving images from urls to AWS S3 storage

I am trying, server-side, to take an image from the web by its URL (e.g. http://www.skrenta.com/images/stackoverflow.jpg) and save this image to my AWS S3 bucket using Meteor, the aws-sdk Meteorite package, and the http Meteor package.
This is my attempt, which does put a file in my bucket (someImageFile.jpg), but the image file is corrupted and cannot be displayed by a browser or a viewer application.
Probably I am doing something wrong with the encoding of the file. I tried many combinations and none of them worked. I also tried adding ContentLength and/or ContentEncoding with different encodings like binary, hex, and base64 (also in combination with Buffer.toString("base64")); none of them worked. Any advice will be greatly appreciated!
This is my server-side code:
var url = "http://www.skrenta.com/images/stackoverflow.jpg";
HTTP.get(url, function (err, data) {
    if (err) {
        console.log("Error: " + err);
    } else {
        //console.log("Result: " + JSON.stringify(data));
        //uncommenting the line above fills up the console with raw image data
        s3.putObject({
            ACL: "public-read",
            Bucket: "MY_BUCKET",
            Key: "someImageFile.jpg",
            Body: new Buffer(data.content, "binary"),
            ContentType: data.headers["content-type"], // = image/jpeg
            //ContentLength: parseInt(data.headers["content-length"]),
            //ContentEncoding: "binary"
        },
        function (err, data) { // callback of s3.putObject
            if (err) {
                console.log("S3 Error: " + err);
            } else {
                console.log("S3 Data: " + JSON.stringify(data));
            }
        });
    }
});
Actually I am trying to use the filepicker.io REST API via HTTP calls, e.g. for storing a converted image to my S3, but this is the minimal example that demonstrates the actual problem.
After several trial-and-error runs I gave up on Meteor.HTTP and put together the code below; maybe it will help somebody who runs into encoding issues with Meteor.HTTP.
Meteor.HTTP seems to be meant for fetching JSON or text data from remote APIs and the like; it seems not quite the right choice for binary data. The Npm http module, however, definitely supports binary data, so this works like a charm:
var http = Npm.require("http");
var url = "http://www.whatever.com/check.jpg";
var req = http.get(url, function (resp) {
    var buf = new Buffer("", "binary");
    resp.on('data', function (chunk) {
        buf = Buffer.concat([buf, chunk]);
    });
    resp.on('end', function () {
        var thisObject = {
            ACL: "public-read",
            Bucket: "mybucket",
            Key: "myNiceImage.jpg",
            Body: buf,
            ContentType: resp.headers["content-type"],
            ContentLength: buf.length
        };
        s3.putObject(thisObject, function (err, data) {
            if (err) {
                console.log("S3 Error: " + err);
            } else {
                console.log("S3 Data: " + JSON.stringify(data));
            }
        });
    });
});
The best solution is to look at what has already been done in this regard:
https://github.com/Lepozepo/S3
Filepicker.io also seems pretty simple:
Integrating Filepicker.IO with Meteor