Uploading Blobs in Blocks using REST API times out on second chunk

NOTE: Could somebody give me an example SAS string (with the block info appended in the right place) that needs to be sent to Azure blob storage? I think that's the issue I'm having: I need to figure out the order of the URI, key, etc. in the string that is sent to Azure with each block.
What I'm trying to accomplish is to grab a SAS key from a service, modify the key string so that Azure knows I'm sending in blocks, and then send the individual blocks of the file with the SAS key from the web client. I'm chunking each file into 2 MB blocks and sending those blocks one at a time with a JavaScript library. So each "file" in the code below is just a 2 MB chunk of a file.
THE PROBLEM: I can successfully grab the SAS key from the service, modify it so that it has the block chunk info in it, send in the FIRST chunk, and then receive a response back from the blob storage server. When I send the second chunk, however, the request for a stream to blob storage hangs and eventually times out. The time-out happens specifically on the second request for a stream, in this bit of code:
SERVER WEB CLIENT CODE:
using (Stream requestStream = request.GetRequestStream())
{
    inputStream.CopyTo(requestStream, file.ContentLength);
}
What could be causing the second chunk to time out? Could it be that the window for the key closes too soon? Below is my code:
private void WriteToBlob(HttpPostedFileBase file, string BlockId, FileProp fp)
{
    var inputStream = file.InputStream;
    Microsoft.WindowsAzure.StorageCredentialsSharedAccessSignature credentials =
        new Microsoft.WindowsAzure.StorageCredentialsSharedAccessSignature(fp.facct);
    string queryString = (new Uri(fp.folderName)).Query;
    // Append the block parameters to the SAS query string
    string RequestUri = string.Format(System.Globalization.CultureInfo.InvariantCulture,
        "{0}/{1}{2}&comp=block&blockid={3}",
        fp.folderName, fp.fileName, queryString,
        Convert.ToBase64String(Encoding.UTF8.GetBytes(BlockId)));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(RequestUri);
    request.Method = "PUT";
    request.ContentLength = inputStream.Length;
    using (Stream requestStream = request.GetRequestStream())
    {
        // Note: the second argument of Stream.CopyTo is the buffer size, not a byte count
        inputStream.CopyTo(requestStream, file.ContentLength);
    }
}
JAVASCRIPT CODE SENDING THE CHUNKS TO THE WEB SERVER CLIENT:
var running = 0;
var chunksize = (Modernizr.blobconstructor) ? uploadChunkSize : null; // if the browser supports the Blob API
window.xhrPool = [];
$('#fileupload').fileupload({
    url: url,
    //formData: [{ name: 'param1', value: 1 }, { name: 'param2', value: 2}],
    singleFileUploads: true, // each file uses an individual XHR request
    //limitMultiFileUploads: 2, // this option is ignored if singleFileUploads is set to true
    multipart: true,
    maxChunkSize: chunksize, // server side is in streaming mode
    sequentialUploads: true, // issue all file upload requests sequentially instead of simultaneously
    dataType: 'json',
    autoUpload: true,
    //acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
    progressInterval: 100,
    bitrateInterval: 100,
    maxFileSize: uploadFileSizeLimit
}).on('fileuploadadd', function (e, data) {
    var filename = data.files[0].name;
    var filesize = data.files[0].size;
    if (filesize == 0) {
        var zeroSizeErrMsg = sceneLayoutService.format('This file {filename} is empty, please select files again without it. ', { filename: filename });
        sceneLayoutService.showErrorDialog(zeroSizeErrMsg);
        return;
    }
    if (window.availableStorageSize != null && window.availableStorageSize != '') {
        if (filesize > window.availableStorageSize) {
            var overSizeErrMsg = sceneLayoutService.format('File size of {filename} exceeds available storage space in your cloud drive. ', { filename: filename });
            sceneLayoutService.showErrorDialog(overSizeErrMsg);
            return;
        }
    } else {
        alert('Unable to retrieve the storage usage.');
    }
    data.jqXHR = data.submit();
    window.xhrPool.push(data.jqXHR);
    sceneLayoutService.addFileToProgressDialog(data, cancelButton);
}).on('fileuploadprocessalways', function (e, data) {
}).on('fileuploadprogressall', function (e, data) {
}).on('fileuploadsubmit', function (e, data) {
    var filesize = data.files[0].size;
    if (filesize == 0) {
        return false;
    }
    if (window.availableStorageSize != null && window.availableStorageSize != '') {
        if (filesize > window.availableStorageSize) {
            return false;
        }
    }
    $('#dlgProgress').parent().show();
    running++;
    sceneLayoutService.showProgressDialog('Uploading files to ' + currentUser + '\'s Cloud Storage ...', abortAllUploads);
    return true;
}).on('fileuploaddone', function (e, data) {
    running--;
    updateStorageQuota(function () {
        var usedStorageSize = (window.usedStorageSize != null) ? bytesToSize(window.usedStorageSize, 2) : 0;
        var totalStorageSize = (window.totalStorageSize != null) ? bytesToSize(window.totalStorageSize, 2) : 0;
        var usageFooterStr = sceneLayoutService.format("Using {used} of {total} (%)", { used: usedStorageSize, total: totalStorageSize });
        $('div.dlgProgressFooter').text(usageFooterStr);
    });
    var docGridUrl = window.baseUrl + '/CloudStorage/ChangePage?page=1&rand=' + sceneLayoutService.getRandomString(4);
    $('#docGridPartial').load(docGridUrl, function () {
        grid.init({
            pageNumber: 1,
            url: window.baseUrl + '/CloudStorage/ChangePage',
            sortColumn: '',
            sortDirection: ''
        });
    });
    sceneLayoutService.updateFileUploadFinalStatus(data, 'done');
    if (!data.result.success) {
        var errMsg = "";
        if (data.result != null) {
            if (data.result.message != null) {
                errMsg += data.result.message;
            }
            if (data.result.error != null) {
                errMsg += data.result.error;
            }
        }
        sceneLayoutService.showErrorDialog(errMsg);
    }
    window.removeXHRfromPool(data);
    if (running == 0) {
        $('#dlgProgress').parent().hide();
        $('#progresses').empty();
    }
}).on('fileuploadfail', function (e, data) {
    running--;
    sceneLayoutService.updateFileUploadFinalStatus(data, 'fail');
    window.removeXHRfromPool(data);
    if (running == 0) {
        $('#dlgProgress').parent().hide();
        $('#progresses').empty();
    }
}).on('fileuploadprogress', function (e, data) {
    // The XHR upload onProgress event is not fired at server-defined intervals and is not supported in IE8/IE9;
    // it is supported in IE10 per the XMLHttpRequest Level 2 specification, http://caniuse.com/xhr2
    sceneLayoutService.updateFileUploadProgress(data);
});

Issue resolved. It turns out the format of the SAS URI was incorrect. Here is how a SAS URI (for a container, in my case) should look:
http://container_uri/filename?key
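To make the shape concrete, here is a JavaScript sketch of how a Put Block request URL can be assembled from those pieces (containerUri, sasToken, and blobName are hypothetical placeholders, not values from my service):

// Hypothetical values standing in for what the SAS service returns
const containerUri = 'https://myaccount.blob.core.windows.net/mycontainer';
const sasToken = 'sv=2012-02-12&sr=c&sp=w&se=...&sig=...'; // container-level SAS with write permission
const blobName = 'bigfile.bin';

// Block IDs must be Base64-encoded and the same length for every block
const blockId = Buffer.from('block-000001').toString('base64');

// The blob name goes between the container URI and the SAS query string;
// comp=block and blockid are appended to that same query string
const putBlockUrl = containerUri + '/' + blobName + '?' + sasToken +
    '&comp=block&blockid=' + encodeURIComponent(blockId);

Once all blocks are uploaded, a final Put Block List request (comp=blocklist) commits them into the blob.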

Related

Pg-promise - How to stream binary data directly to response

Forgive me, I'm still learning. I'm trying to download some mp3 files that I have stored in a table. I can download files directly from the file system like this:
if (fs.existsSync(filename)) {
    res.setHeader('Content-disposition', 'attachment; filename=' + filename);
    res.setHeader('Content-Type', 'application/audio/mpeg3');
    var rstream = fs.createReadStream(filename);
    rstream.pipe(res);
}
I have stored the data in the table using the pg-promise example in the docs, like so:
const rs = fs.createReadStream(filename);

function receiver(_, data) {
    function source(index) {
        if (index < data.length) {
            return data[index];
        }
    }
    function dest(index, data) {
        return this.none('INSERT INTO test_bin (utterance) VALUES($1)', data);
    }
    return this.sequence(source, {dest});
} // end receiver func

rep.tx(t => {
        return streamRead.call(t, rs, receiver);
    })
    .then(data => {
        console.log('DATA:', data);
    })
    .catch(error => {
        console.log('ERROR: ', error);
    });
But now I want to take that data out of the table and download it to the client. The example in the docs for reading the binary data back converts it to JSON and then prints it to the console like this:
db.stream(qs, s => {
    s.pipe(JSONStream.stringify()).pipe(process.stdout)
})
and that works, so the data is coming out of the database OK. But I can't seem to send it to the client. Since the data is already a stream, I have tried:
db.stream(qs, s => {
    s.pipe(res);
});
But I get a TypeError: First argument must be a string or Buffer.
Alternatively, I could take that stream, write it to the file system, and then serve it as in the first snippet above, but that seems like a workaround. I wish the docs had an example of how to save to a file.
What step am I missing?
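One plausible missing step (a sketch, not from the pg-promise docs): db.stream emits row objects rather than raw bytes, so piping it straight into res fails with exactly that TypeError. An object-mode Transform can pull the bytea column out of each row first. The utterance column name matches the insert above; the rest is assumed:

const { Transform } = require('stream');

// Accepts row objects from the query stream, emits raw Buffers for res
const rowToBuffer = new Transform({
    writableObjectMode: true,
    readableObjectMode: false,
    transform(row, _enc, cb) {
        cb(null, row.utterance); // utterance is the bytea column inserted above
    }
});

db.stream(qs, s => {
    res.setHeader('Content-Type', 'audio/mpeg'); // assumed content type for the mp3 data
    s.pipe(rowToBuffer).pipe(res);
});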

AEM - How to tweak activation error message

We are working in an AEM 6.1 environment and have created an activation preprocessor that stops pages from being activated if certain attributes are not set. That works great, but we'd also like to change the error message displayed by the activation process when the preprocessor throws a ReplicationException. Can anyone point me to the code that actually displays the error message?
We overrode several functions in SiteAdmin.Actions.js. Copy it from the libs folder to /apps/cq/ui/widgets/source/widgets/wcm/SiteAdmin.Actions.js, or use CQ.Ext.override.
We need to override the CQ.wcm.SiteAdmin.scheduleForActivation and CQ.wcm.SiteAdmin.internalActivatePage methods.
We do it using the following code:
CQ.wcm.SiteAdmin.internalActivatePage = function(paths, callback) {
    if (callback == undefined) {
        // assume scope is admin and reload grid
        var admin = this;
        callback = function(options, success, response) {
            if (success) admin.reloadPages();
            else admin.unmask();
        };
    }
    var preActionCallback = function(options, success, response) {
        if (success) {
            var responseObj = CQ.Util.eval(response);
            if (responseObj.activation) {
                CQ.HTTP.post(
                    CQ.shared.HTTP.externalize("/bin/replicate.json"),
                    callback,
                    { "_charset_": "utf-8", "path": paths, "cmd": "Activate" }
                );
            } else {
                CQ.wcm.SiteAdmin.preactivateMessage(responseObj);
            }
        } else {
            CQ.Ext.Msg.alert(
                CQ.I18n.getMessage("Error"), CQ.I18n.getMessage("Could not activate page."));
        }
        admin.unmask();
    };
    CQ.HTTP.get(
        "/apps/sling/servlet/content/preActivateValidator.html?path=" + paths,
        preActionCallback
    );
};
This path, /apps/sling/servlet/content/preActivateValidator.html (you can use any other link and extension), returns JSON with some info about the messages; it is parsed by the custom method CQ.wcm.SiteAdmin.preactivateMessage, which generates the custom error messages:
CQ.wcm.SiteAdmin.preactivateMessage = function(responseObj) {
    var message = "";
    var incorrectItems = responseObj.incorrectItems;
    if (responseObj.countOfIncorrectItems > 1) message = message + "s";
    if (responseObj.missingMetadata) {
        message = message + "Please, set \"Programming Type\" for next videos:<br/>";
        var missingMetadataPaths = responseObj.missingMetadata;
        for (var i = 0; i < missingMetadataPaths.length; i++) {
            message = message + "" + missingMetadataPaths[i].path + "<br/>";
        }
        message += "<br/>";
    }
    if (message == "") {
        message = "Unknown error.";
    }
    CQ.Ext.Msg.alert(
        CQ.I18n.getMessage("Error"), CQ.I18n.getMessage(message));
}
So you can implement a component or servlet which will verify your attributes and generate the JSON.
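For illustration, the JSON returned by the validator could look something like this (a sketch inferred from the fields that preactivateMessage reads above; the exact shape is up to your servlet):

{
    "activation": false,
    "countOfIncorrectItems": 2,
    "incorrectItems": ["/content/videos/a", "/content/videos/b"],
    "missingMetadata": [
        { "path": "/content/videos/a" },
        { "path": "/content/videos/b" }
    ]
}

When activation is true, the override above proceeds with the normal /bin/replicate.json POST; otherwise preactivateMessage builds the custom alert from missingMetadata.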

Conversion of Facebook payload to sha1 value to check and match with x-hub-signature

I am trying to implement Facebook webhook security.
The code below works fine for text messages, but the moment attachments are sent, the SHA value does not match.
I tried calculating it on the escaped Unicode lowercase payload, but then ended up with a different SHA value for simple texts as well.
Any help will be greatly appreciated.
byte[] payloadBytes = request.inputStream.bytes
String hashReceived = xHubSignature.substring(5)
String hashComputed = HmacUtils.hmacSha1Hex(facebookAppSecret.getBytes(StandardCharsets.UTF_8), payloadBytes)
log.info('Received {} computed {}', hashReceived, hashComputed)
Turns out the problem was in the way I was accessing the data, something like this:
var express = require('express');
var app = express();
// Other imports
app.listen(app.get('port'), () => {
    console.log('running on port', app.get('port'));
});
The request body was accessed like this:
app.post('/webhook/', (req, res) => {
    let body = req.body;
    // By this time the encoded characters were already decoded, hence the hash check was failing.
    // ...processing the data
});
The solution was to use the native http server to create the server and access the data, so that the hash check was done on the raw data.
Probably this can be done using Express as well, but it was not working for me (a sketch of an Express variant follows the edit below).
This is what I did.
const http = require('http');
const url = require("url");
const crypto = require('crypto');

http.createServer((req, res) => {
    res.setHeader('Content-Type', 'text/html; charset=utf-8');
    const urlItems = url.parse(req.url); // parse the incoming request URL
    let body = '';
    req.on('data', chunk => {
        body += chunk;
    });
    if (urlItems.pathname === '/facebook/' && req.method === 'POST') {
        req.on('end', () => {
            let hmac = crypto.createHmac('sha1', appSecret); // appSecret: the Facebook app secret, defined elsewhere
            hmac.update(body, 'utf-8');
            let computedSig = `sha1=${hmac.digest('hex')}`;
            if (req.headers['x-hub-signature'] === computedSig) {
                console.log(`${computedSig} matched ${req.headers['x-hub-signature']}`);
            } else {
                console.log(`Found ${computedSig} instead of ${req.headers['x-hub-signature']}`);
            }
            res.end(JSON.stringify({ status: 'ok' }));
        });
    }
}).listen(process.env.PORT || 3000);
EDIT 1: Due to a change in infra, we switched to Node, hence the Node code.
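As a follow-up to the Express remark above, a sketch of one way to keep Express and still verify against the raw bytes, using the verify hook of its JSON body parser (assumed setup, not what we ran):

const express = require('express');
const crypto = require('crypto');
const app = express();

// Capture the raw request bytes before JSON parsing touches them
app.use(express.json({
    verify: (req, res, buf) => { req.rawBody = buf; }
}));

app.post('/webhook/', (req, res) => {
    const expected = 'sha1=' + crypto
        .createHmac('sha1', process.env.APP_SECRET) // app secret assumed to live in an env var
        .update(req.rawBody)                        // hash the raw bytes, not the parsed body
        .digest('hex');
    if (req.headers['x-hub-signature'] === expected) {
        res.json({ status: 'ok' });
    } else {
        res.sendStatus(403);
    }
});

app.listen(process.env.PORT || 3000);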

Using Sailsjs Skipper file uploading with Flowjs

I'm trying to use skipper and flowjs (with ng-flow) together for big file uploads.
Based on the Node.js sample located in the flowjs repository, I've created my Sails controller and service to handle file uploads. When I upload a small file it works fine, but if I try to upload a bigger file (e.g. a 200 MB video) I receive the errors listed below and the array req.file('file')._files is empty. Interestingly, this happens only a few times during an upload. For example, if flowjs cuts the file into 150 chunks, these errors appear in the Sails console only 3-5 times. So almost all chunks are uploaded to the server, but a few are lost, and as a result the file is corrupted.
verbose: Unable to expose body parameter `flowChunkNumber` in streaming upload! Client tried to send a text parameter (flowChunkNumber) after one or more files had already been sent. Make sure you always send text params first, then your files.
These errors appear for all flowjs parameters.
I know that text parameters must be sent first for skipper to work correctly, and I've checked in the Chrome network console that flowjs sends the data in the correct order.
Any suggestions?
Controller method
upload: function (req, res) {
    flow.post(req, function (status, filename, original_filename, identifier) {
        sails.log.debug('Flow: POST', status, original_filename, identifier);
        res.status(status).send();
    });
}
Service post method
$.post = function(req, callback) {
    var fields = req.body;
    var file = req.file($.fileParameterName);
    if (!file || !file._files.length) {
        console.log('no file', req);
        file.upload(function() {});
    }
    var stream = file._files[0].stream;
    var chunkNumber = fields.flowChunkNumber;
    var chunkSize = fields.flowChunkSize;
    var totalSize = fields.flowTotalSize;
    var identifier = cleanIdentifier(fields.flowIdentifier);
    var filename = fields.flowFilename;
    if (file._files.length === 0 || !stream.byteCount) {
        callback('invalid_flow_request', null, null, null);
        return;
    }
    var original_filename = stream.filename;
    var validation = validateRequest(chunkNumber, chunkSize, totalSize, identifier, filename, stream.byteCount);
    if (validation == 'valid') {
        var chunkFilename = getChunkFilename(chunkNumber, identifier);
        // Save the chunk via the skipper file upload API
        file.upload({saveAs: chunkFilename}, function(err, uploadedFiles) {
            // Do we have all the chunks?
            var currentTestChunk = 1;
            var numberOfChunks = Math.max(Math.floor(totalSize / (chunkSize * 1.0)), 1);
            var testChunkExists = function() {
                fs.exists(getChunkFilename(currentTestChunk, identifier), function(exists) {
                    if (exists) {
                        currentTestChunk++;
                        if (currentTestChunk > numberOfChunks) {
                            callback('done', filename, original_filename, identifier);
                        } else {
                            // Recursion
                            testChunkExists();
                        }
                    } else {
                        callback('partly_done', filename, original_filename, identifier);
                    }
                });
            };
            testChunkExists();
        });
    } else {
        callback(validation, filename, original_filename, identifier);
    }
};
Edit
Found a solution: set the flowjs property maxChunkRetries: 5, because by default it's 0.
On the server side, if req.file('file')._files is empty, I throw a non-permanent (in the context of flowjs) error, so the chunk is retried; see the sketch below.
So this solves my problem, but the question of why it behaves like this is still open. The sample code for flowjs and Node.js uses connect-multiparty and has no additional error-handling code, so it's most likely a skipper body parser bug.
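For reference, a sketch of the client-side flowjs configuration this implies (maxChunkRetries, chunkRetryInterval, and permanentErrors are real flowjs options; the exact values and the error code returned by the server are assumptions):

var flow = new Flow({
    target: '/upload',
    chunkSize: 1024 * 1024,
    maxChunkRetries: 5,      // default is 0, so a single lost chunk used to corrupt the file
    chunkRetryInterval: 500, // ms to wait between retries (assumed value)
    // Status codes treated as permanent failures; anything else is retried.
    // When req.file('file')._files is empty, the server answers with a code
    // outside this list (e.g. 400), so flowjs re-sends that chunk.
    permanentErrors: [404, 415, 500, 501]
});

With ng-flow, the same options can be supplied via flowFactoryProvider.defaults in the Angular config block.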

Getting data from a MongoDB collection

Trying to get some messages from a db collection, I've tried this code (server side):
MongoClient.connect('mongodb://127.0.0.1:27017/gt-chat', function(err, db) {
    if (err) throw err;
    var history = "";
    var collection = db.collection('gt-chat');
    console.log("******************************Printing docs from Cursor Each");
    collection.find({}, {_id: 0}).sort({$natural: 1}).limit(20).each(function(err, doc) {
        console.log(doc);
        if (doc != null) {
            console.log("Doc from Each ");
            history = history + console.dir(doc) + "\r\n";
            console.dir(doc);
        }
    });
    console.log("/////////////HISTORY//////////////////");
    console.log(history); // data is shown there under the shell console, all is fine!
    socket.emit('updatechat', 'SERVER', history); // should send the messages using socket
});
and then on the client side, I've tried :
// listener: whenever the server emits 'updatechat', this updates the chat body
socket.on('updatechat', function (username, data) {
    var date = new Date();
    var h = date.getHours();
    if (h < 10) {
        h = "0" + h;
    }
    var m = date.getMinutes();
    if (m < 10) {
        m = "0" + m;
    }
    var s = date.getSeconds();
    if (s < 10) {
        s = "0" + s;
    }
    var time = h + ":" + m + ":" + s;
    $('#conversation').append('<b><strong>' + time + '</strong> ' + username + ':</b> ' + data + '<br>');
});
which doesn't show anything, contrary to what I expected :(
I am quite sure about the updatechat function because I can get a message with it, as I've tried:
socket.emit('updatechat', 'SERVER', "this is history"); // < message is well sent, so no problem there!
but I don't get the whole history.
So the goal is to get some messages from a MongoDB collection and display them in the browser using socket.emit.
In the shell, for debugging, it shows:
Doc from Each { message: '<strong>12:16:27</strong><span style=\'color:#2fed7e\'><b>guibs</b>< /span><em> hum</em>' } { message: '<strong>12:16:27</strong><span style=\'color:#2fed7e\'><b>guibs</b>< /span><em> hum</em>' }
So, the data and messages exist, but I can't read them from the browser. The shell says 'history is not defined'! What is wrong with my concatenation? And with my variable declaration? I don't understand. Thanks for your help!
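For reference, a minimal sketch of one way this could work (same collection and socket as above, everything else assumed): console.dir returns undefined, so concatenating its result never builds the string, and the emit runs before the asynchronous query has finished. Building the history from each document's message field inside the callback avoids both problems:

MongoClient.connect('mongodb://127.0.0.1:27017/gt-chat', function(err, db) {
    if (err) throw err;
    db.collection('gt-chat')
        .find({}, {_id: 0}).sort({$natural: 1}).limit(20)
        .toArray(function(err, docs) {
            if (err) throw err;
            // Build the history from the message field of each document
            var history = docs.map(function(doc) { return doc.message; }).join('\r\n');
            // Emit only after the query has completed
            socket.emit('updatechat', 'SERVER', history);
        });
});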