How to restrict some file types from uploading in google-cloud-storage, and also determine if the file is a document or an image?

I am using the code as given in the firebase storage documentation to upload files to my project.
Here is the code:
var metadata = {
  contentType: 'image/jpeg'
};

var uploadTask = storageRef.child('images/' + file.name).put(file, metadata);

uploadTask.on(firebase.storage.TaskEvent.STATE_CHANGED,
  (snapshot) => {
    var progress = (snapshot.bytesTransferred / snapshot.totalBytes) * 100;
    console.log('Upload is ' + progress + '% done');
    switch (snapshot.state) {
      case firebase.storage.TaskState.PAUSED:
        console.log('Upload is paused');
        break;
      case firebase.storage.TaskState.RUNNING:
        console.log('Upload is running');
        break;
    }
  },
  (error) => {
    switch (error.code) {
      case 'storage/unauthorized':
        // User doesn't have permission to access the object
        break;
      case 'storage/canceled':
        // User canceled the upload
        break;
      // ...
      case 'storage/unknown':
        // Unknown error occurred; inspect error.serverResponse
        break;
    }
  },
  () => {
    uploadTask.snapshot.ref.getDownloadURL().then((downloadURL) => {
      console.log('File available at', downloadURL);
    });
  }
);
I want to add two things. First, I'd like to determine whether the file being uploaded is a document or a photo. Second, I want to allow only PDF and JPEG files to be uploaded.
Does Firebase Storage provide any function to achieve this, or do I need to determine the file type manually from the file name (for example with split)?

Cloud Storage does not offer content-type restriction enforcement, except for signed URLs, which can enforce the HTTP Content-Type header.
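For example, a V4 signed URL generated with the Node.js @google-cloud/storage client can pin the Content-Type of the upload. A minimal sketch, assuming a hypothetical bucket and object path:
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function signedPdfUploadUrl() {
  const [url] = await storage
    .bucket('my-bucket')                    // hypothetical bucket name
    .file('uploads/report.pdf')             // hypothetical object path
    .getSignedUrl({
      version: 'v4',
      action: 'write',
      expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
      contentType: 'application/pdf',       // the PUT fails unless this header matches
    });
  return url;
}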
Do not use the filename to determine the file type. There are good libraries that offer more reliable content-type detection. PDF and JPEG can easily be detected by reading the first few bytes of the file, although that is not a guarantee: random data can also contain the required byte values at the key locations.
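As a rough sketch of that byte-sniffing approach in the browser, assuming file is a File taken from an <input type="file">:
// Minimal magic-byte sniffing: JPEG starts with FF D8 FF, PDF with "%PDF".
async function sniffType(file) {
  const bytes = new Uint8Array(await file.slice(0, 4).arrayBuffer());
  if (bytes[0] === 0xFF && bytes[1] === 0xD8 && bytes[2] === 0xFF) {
    return 'image/jpeg';
  }
  if (bytes[0] === 0x25 && bytes[1] === 0x50 &&
      bytes[2] === 0x44 && bytes[3] === 0x46) {
    return 'application/pdf';
  }
  return null; // anything else: reject the upload
}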

Related

How can I get a continuous stream of samples from the JavaScript AudioAPI

I'd like to get a continuous stream of samples in JavaScript from the audio API. The only way I've found to get samples is through the MediaRecorder object in the JavaScript Audio API.
I set up my recorder like this:
var options = {
  mimeType: "audio/webm;codec=raw",
};
this.mediaRecorder = new MediaRecorder(stream, options);
this.mediaRecorder.ondataavailable = function (e) {
  this.decodeChunk(e.data);
}.bind(this);
this.mediaRecorder.start(/*timeslice=*/ 100 /*ms*/);
This gives me a callback 10 times a second with new data. All good so far.
The data is encoded, so I use audioCtx.decodeAudioData to process it:
let fileReader = new FileReader();
fileReader.onloadend = () => {
  let encodedData = fileReader.result;
  // console.log("Encoded length: " + encodedData.byteLength);
  this.audioCtx.decodeAudioData(encodedData,
    (decodedSamples) => {
      let newSamples = decodedSamples.getChannelData(0)
        .slice(this.firstChunkSize, decodedSamples.length);
      // The callback which handles the decodedSamples goes here. All good.
      if (this.firstChunkSize == 0) {
        this.firstChunkSize = decodedSamples.length;
      }
    });
};
This all works fine too.
Setting up the data for the file reader is where it gets strange:
let blob;
if (!this.firstChunk) {
  this.firstChunk = chunk;
  blob = new Blob([chunk], { 'type': chunk.type });
} else {
  blob = new Blob([this.firstChunk, chunk], { 'type': chunk.type });
}
fileReader.readAsArrayBuffer(blob);
The first chunk works just fine, but the second and later chunks fail to decode unless I combine them with the first chunk. I'm guessing what is happening here is that the first chunk has a header that is required to decode the data. I remove the samples decoded from the first chunk after decoding them a second time. See this.firstChunkSize above.
This all executes without error, but the audio that I get back has a vibrato-like effect at 10 Hz. A few hypotheses:
- I have some simple mistake in my "firstChunkSize" and "slice" logic.
- The first chunk has some header which is causing the remaining data to be interpreted in a strange way.
- There is some strange interaction with some option when creating the audio source (noise cancellation?).
You want codecs=, not codec=.
var options = {
mimeType: "audio/webm;codecs=pcm",
}
Though MediaRecorder.isTypeSupported will return true with codec=, that is only because the unrecognized parameter is ignored. For example:
MediaRecorder.isTypeSupported("audio/webm;codec=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=asdfasd")
false
MediaRecorder.isTypeSupported("audio/webm;codec=asdfasd")
true
The garbage codec name asdfasd is "supported" if you specify codec instead of codecs.

How to retrieve data entered in content control in webaddin

I have a rich text content control named firstname in a Word document. I am trying to access its content but am not able to retrieve it.
This is a sample method given on MSDN. Using it I am able to get the control's id and its type, but not the data. Please let me know whether there is any way to access it.
function bindContentControl() {
  Office.context.document.bindings.addFromNamedItemAsync(
    'FirstName', Office.BindingType.Text, { id: 'firstName' },
    function (result) {
      if (result.status === Office.AsyncResultStatus.Succeeded) {
        write('Control bound. Binding.id: ' + result.value.id +
              ' Binding.type: ' + result.value.type);
      } else {
        write('Error: ' + result.error.message);
      }
    });
}

// Function that writes to a div with id='message' on the page.
function write(message) {
  document.getElementById('message').innerText += message;
}
The sample code you provided creates a binding to an object with the name 'FirstName'.
You will want to use context.document.contentControls.getByTitle() instead to retrieve content controls with a given title. Here's my sample code:
await Word.run(async (context) => {
  let controls = context.document.contentControls.getByTitle("FirstName");
  controls.load();
  await context.sync();

  // Assuming there's only one paragraph.
  controls.items[0].paragraphs.load();
  await context.sync();
  console.log(controls.items[0].paragraphs.items[0].text);
});
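If you only need the plain text, a shorter variant (a sketch, not part of the sample above) can load the control's text property directly:
await Word.run(async (context) => {
  // getByTitle returns a collection; load only the `text` property.
  const controls = context.document.contentControls.getByTitle("FirstName");
  controls.load("text");
  await context.sync();
  console.log(controls.items[0].text);
});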

Pg-promise - How to stream binary data directly to response

Forgive me, I'm still learning. I'm trying to download some mp3 files that I have stored in a table. I can download files directly from the file system like this:
if (fs.existsSync(filename)) {
  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-Type', 'audio/mpeg');
  var rstream = fs.createReadStream(filename);
  rstream.pipe(res);
}
I stored the data in the table using the pg-promise example from the docs, like so:
const rs = fs.createReadStream(filename);

function receiver(_, data) {
  function source(index) {
    if (index < data.length) {
      return data[index];
    }
  }

  function dest(index, data) {
    return this.none('INSERT INTO test_bin (utterance) VALUES($1)', data);
  }

  return this.sequence(source, {dest});
} // end receiver func

rep.tx(t => {
    return streamRead.call(t, rs, receiver);
  })
  .then(data => {
    console.log('DATA:', data);
  })
  .catch(error => {
    console.log('ERROR: ', error);
  });
But now I want to take that data out of the table and send it to the client. The example in the docs for reading binary data back out converts it to JSON and prints it to the console, like this:
db.stream(qs, s => {
  s.pipe(JSONStream.stringify()).pipe(process.stdout);
});
and that works, so the data is coming out of the database OK. But I can't seem to send it to the client. Since the data is already a stream, I have tried:
db.stream(qs, s => {
  s.pipe(res);
});
But I get a TypeError: First argument must be a string or Buffer.
Alternatively, I could take that stream, write it to the file system, and then serve it as in the first snippet above, but that seems like a workaround. I wish there were an example of how to save to a file in the docs.
What step am I missing?
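One possibility: pg-query-stream emits row objects rather than raw bytes, so piping it straight to res hands the response an object instead of a Buffer. A minimal sketch that extracts the bytea column before piping, assuming the column is named utterance as in the insert above:
const { Transform } = require('stream');

db.stream(qs, s => {
  // Rows arrive as objects like { utterance: <Buffer ...> };
  // pull the Buffer out so `res` receives raw bytes.
  const toBytes = new Transform({
    writableObjectMode: true,
    transform(row, _enc, done) {
      done(null, row.utterance);
    }
  });
  s.pipe(toBytes).pipe(res);
});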

Mongodb, mongoose on nodejs, saving photo in db

I have a problem with saving a default photo in my MongoDB. A user can upload a photo and it is saved in the DB with no problems. But I want to add a default photo for when the user didn't upload one.
This is the part of code I added:
busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
  console.log("file");
  defaultPhoto = false;
  file.pipe(fs.createWriteStream(saveTo));
  newProfile.photo.contentType = mimetype;
});

busboy.on('field', (fieldname, val, fieldnameTruncated, valTruncated, encoding, mimetype) => {
  if (fieldname == "voice") {
    newProfile.voice.data = val;
    newProfile.voice.contentType = 'audio/webm';
  } else {
    newProfile[fieldname] = val;
  }
});

busboy.on('finish', () => {
  if (defaultPhoto) {
    newProfile.photo.contentType = 'image/png';
    newProfile.photo.data = fs.readFileSync(path.join(__dirname + '/../images/', "profile-default.png"));
  } else {
    newProfile.photo.data = fs.readFileSync(saveTo);
    fs.unlink(saveTo);
  }
  newProfile.alias = newProfile.firstName + "" + newProfile.surname;
  newProfile.alias = newProfile.alias.toLowerCase();
  Profile.addProfile(newProfile, (err) => {
    if (err) console.log(err);
  });
});
With this code, uploading works OK, but when the user doesn't upload a photo I get this error:
ValidationError: Profile validation failed: photo: Cast to Object failed for value "null" at path "photo"
Thanks for the help.
Firstly, it's usually not a good idea to store images in a database for performance reasons. It would be better to store the images on your server and then store references to them in your database.
Secondly, I am guessing that this line is causing the problem:
newProfile.photo.data = fs.readFileSync(path.join(__dirname + '/../images/', "profile-default.png"));
newProfile.photo.data is probably not being set because the call to readFileSync fails; it throws if the file doesn't exist. Check that you actually have the default profile photo stored in the directory you are passing to it.
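A minimal sketch of a guard for that case (the path and variable names follow the question's code, but the check itself is an assumption):
const path = require('path');
const fs = require('fs');

// Resolve the default image once and fail fast if it is missing.
const DEFAULT_PHOTO = path.join(__dirname, '..', 'images', 'profile-default.png');

if (defaultPhoto) {
  if (!fs.existsSync(DEFAULT_PHOTO)) {
    throw new Error('Default photo missing at ' + DEFAULT_PHOTO);
  }
  newProfile.photo.contentType = 'image/png';
  newProfile.photo.data = fs.readFileSync(DEFAULT_PHOTO);
}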

Using Sailsjs Skipper file uploading with Flowjs

I'm trying to use skipper and flowjs (with ng-flow) together for big file uploads.
Based on the sample for Node.js in the flowjs repository, I've created my Sails controller and service to handle file uploads. When I upload a small file it works fine, but if I try to upload a bigger file (e.g. a 200 MB video) I receive the errors listed below and the array req.file('file')._files is empty. Interestingly, this happens only a few times during an upload. For example, if flowjs cuts the file into 150 chunks, these errors appear in the Sails console only 3-5 times. So almost all chunks are uploaded to the server, but a few are lost and as a result the file is corrupted.
verbose: Unable to expose body parameter `flowChunkNumber` in streaming upload! Client tried to send a text parameter (flowChunkNumber) after one or more files had already been sent. Make sure you always send text params first, then your files.
These errors appear for all of the flowjs parameters.
I know that text parameters must be sent before the files for skipper to work correctly, and in the Chrome network console I've verified that flowjs sends the data in that order.
Any suggestions?
Controller method
upload: function (req, res) {
  flow.post(req, function (status, filename, original_filename, identifier) {
    sails.log.debug('Flow: POST', status, original_filename, identifier);
    res.status(status).send();
  });
}
Service post method
$.post = function (req, callback) {
  var fields = req.body;
  var file = req.file($.fileParameterName);
  if (!file || !file._files.length) {
    console.log('no file', req);
    file.upload(function () {});
  }

  var stream = file._files[0].stream;
  var chunkNumber = fields.flowChunkNumber;
  var chunkSize = fields.flowChunkSize;
  var totalSize = fields.flowTotalSize;
  var identifier = cleanIdentifier(fields.flowIdentifier);
  var filename = fields.flowFilename;

  if (file._files.length === 0 || !stream.byteCount) {
    callback('invalid_flow_request', null, null, null);
    return;
  }

  var original_filename = stream.filename;
  var validation = validateRequest(chunkNumber, chunkSize, totalSize, identifier, filename, stream.byteCount);
  if (validation == 'valid') {
    var chunkFilename = getChunkFilename(chunkNumber, identifier);
    // Save the chunk via the skipper file upload API
    file.upload({ saveAs: chunkFilename }, function (err, uploadedFiles) {
      // Do we have all the chunks?
      var currentTestChunk = 1;
      var numberOfChunks = Math.max(Math.floor(totalSize / (chunkSize * 1.0)), 1);
      var testChunkExists = function () {
        fs.exists(getChunkFilename(currentTestChunk, identifier), function (exists) {
          if (exists) {
            currentTestChunk++;
            if (currentTestChunk > numberOfChunks) {
              callback('done', filename, original_filename, identifier);
            } else {
              // Recursion
              testChunkExists();
            }
          } else {
            callback('partly_done', filename, original_filename, identifier);
          }
        });
      };
      testChunkExists();
    });
  } else {
    callback(validation, filename, original_filename, identifier);
  }
};
Edit
Found a solution: set the flowjs property maxChunkRetries: 5, because by default it's 0.
On the server side, if req.file('file')._files is empty, I return a non-permanent (in flowjs terms) error so the chunk is retried.
This solves my problem, but the question of why it behaves this way is still open. The sample code for flowjs and Node.js uses connect-multiparty and has no additional error-handling code, so it's most likely a skipper body-parser bug.
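For reference, a minimal sketch of the client-side flowjs setup with that retry option (the target URL and retry interval are placeholders):
var flow = new Flow({
  target: '/upload',        // hypothetical upload endpoint
  maxChunkRetries: 5,       // defaults to 0, so a single dropped chunk corrupts the file
  chunkRetryInterval: 500   // ms to wait before resending a failed chunk
});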