On Keras and batch ground truth

I'm trying to implement a loss function in Keras with the TensorFlow backend. Since the function is somewhat tricky, I would like to set up the following scheme:
Throughout the whole network, I would like to keep some information I(B) that depends on the batch B passed in at the input. I know this can be done with a multi-input/output network.
More precisely, for each batch B, I would like to retrieve the associated labels B_true, because my function I is a function of this ground truth, i.e. I(B_true).
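For illustration, here is roughly what I have in mind, assuming a custom loss can see the batch labels (my_loss is a placeholder name, and the reduce_mean is just a stand-in for my actual function I):

import tensorflow as tf
from tensorflow import keras

def my_loss(y_true, y_pred):
    # y_true is B_true for the current batch, so I(B_true)
    # could be computed here on the fly.
    info = tf.reduce_mean(y_true)  # stand-in for the actual I(B_true)
    return keras.losses.mse(y_true, y_pred) + info  # combined however I is used

# model.compile(optimizer='adam', loss=my_loss)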
Is this possible?
Thanks a lot!

Related

How to create a "Denoising Autoencoder" in Matlab?

I know MATLAB has the function trainAutoencoder(input, settings) to create and train an autoencoder. The resulting object supports the two functions encode() and decode().
But this is only applicable to the case of normal autoencoders. What if you want a denoising autoencoder? I searched and found some sample code that uses the network() function to convert the autoencoder into an ordinary network object and then calls train(network, noisyInput, smoothOutput), like a denoising autoencoder.
But there are multiple missing parts:
How do I use this new network object to "encode" new data points? It does not support encode().
How do I get the "latent" variables, i.e. the features, out of this network?
I would appreciate it if anyone could help me resolve this issue.
Thanks,
-Moein
At present (2019a), MATLAB does not permit users to add layers manually to an autoencoder. If you want to build your own, you will have to start from scratch using the layers provided by MATLAB.
In order to use trainNetwork(...) to train your model, you will have to find a way to load your data into an object called imageDatastore. The difficulty with autoencoder data is that there are NO labels, which imageDatastore requires, so you will have to find a smart way around this; essentially you are dealing with a so-called OCC (one-class classification) problem.
https://www.mathworks.com/help/matlab/ref/matlab.io.datastore.imagedatastore.html
Use activations(...) to dump outputs from intermediate (hidden) layers
https://www.mathworks.com/help/deeplearning/ref/activations.html?searchHighlight=activations&s_tid=doc_srchtitle
I went back and forth between MATLAB and Python (Keras) for deep learning for a couple of weeks; eventually I chose the latter, even though I am a long-term, loyal MATLAB user and a rookie in Python. My two cents: the former has too many restrictions when it comes to deep learning.
Good luck. :-)
If by 'simulation' you mean prediction/inference, simply use activations(...) to dump outputs from any intermediate (hidden) layer, as I mentioned earlier, so that you can inspect them.
Another way is to construct an identical network containing only the encoding part, copy your trained parameters into it, and feed it your simulated signals.
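Since Keras came up above, here is a minimal sketch of that same idea in Keras, under the assumption that a simple dense autoencoder fits your data (all sizes, names, and the stand-in data are illustrative): train on noisy inputs against clean targets, then build a second model that shares the trained encoder layers to pull out the latent features.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim, latent_dim = 784, 32   # illustrative sizes

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(latent_dim, activation='relu')(inputs)
decoded = layers.Dense(input_dim, activation='sigmoid')(encoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# Denoising: fit noisy inputs against the clean originals.
x_clean = np.random.rand(1000, input_dim).astype('float32')   # stand-in data
x_noisy = (x_clean + 0.1 * np.random.randn(1000, input_dim)).astype('float32')
autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=64)

# Encoder-only model sharing the trained layers; this is the Keras
# equivalent of copying the encoding half into a separate network.
encoder = keras.Model(inputs, encoded)
latent_features = encoder.predict(x_noisy)   # the latent variables

Calling encoder.predict is then the "encode" step the question asks about.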

Object detection for a single object only

I have been working on object detection, but the usual methods consist of very deep neural networks and require lots of memory to store the trained models. E.g., I once trained a Mask R-CNN model, and the weights take up 200 MB.
However, my focus is on detecting a single object only, so I guess these methods are not suitable. Is there any object detection method that can do this job with a low memory requirement?
You can try SSD or Faster R-CNN; they are readily available in the TensorFlow Object Detection API:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Here you can get pre-trained models and config files.
You can select your model by looking at the speed and mAP (accuracy) columns, as per your requirements.
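For reference, running one of those pre-trained models typically looks roughly like the sketch below, which assumes the TF 1.x frozen-graph format used by that model zoo (the file path, image size, and score threshold are placeholders):

import numpy as np
import tensorflow as tf  # TF 1.x, matching the model zoo above

# Load the exported frozen graph (path is a placeholder).
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # stand-in image batch
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
    keep = scores[0] > 0.5          # keep only confident detections
    print(boxes[0][keep], classes[0][keep])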
Following mukul's answer, I specifically recommend that you check out SSDLite-MobileNetV2.
It's a lightweight model that is still expressive enough for good results.
This holds especially when you restrict yourself to a single class, as you can see in the example of FaceSSD-MobileNetV2 here (note, however, that this is vanilla SSD).
So you can simply take the pre-trained SSDLite-MobileNetV2 model with the corresponding config file and modify it for a single class.
This means changing num_classes to 1, modifying the label_map.pbtxt, and of course preparing the dataset with the single class you want, as sketched below.
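Concretely, a single-class label map would look like this, with the class name as a placeholder (num_classes: 1 goes in the model section of the pipeline config):

item {
  id: 1
  name: 'my_object'
}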
If you want a more robust model, but one that has no pre-trained detection checkpoint, you can use an FPN version.
Check out this config file, which uses MobileNetV1, and modify it for your needs (e.g., switching to MobileNetV2, enabling use_depthwise, etc.).
On the one hand there is no pre-trained detection model, but on the other the detection head is shared over all (relevant) scales, so it's somewhat easier to train.
So simply fine-tune it from the corresponding classification checkpoint from here.
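In the pipeline config, fine-tuning from a classification checkpoint (rather than a detection one) is expressed roughly like this; the checkpoint path is a placeholder, and note that older configs use a from_detection_checkpoint flag instead of fine_tune_checkpoint_type:

train_config {
  fine_tune_checkpoint: "mobilenet_v2/model.ckpt"
  fine_tune_checkpoint_type: "classification"
}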

Evaluating neural networks built with Comp Graph dl4j

I am trying to build a complex neural network using the ComputationGraph implementation in Deeplearning4j. I need to have multiple outputs, which is why I can't go with the generic MultiLayerConfiguration.
However, my problem is that in this case I do not know how to evaluate my model, and I would like at least to know the accuracy.
Has anybody worked with ComputationGraphs in dl4j?
First of all, yes: tons of people use ComputationGraph. They usually start from our existing examples, though, and tend to use it mainly for things like seq2seq.
As for your question on evaluation, it's conceptually the same as for MultiLayerNetwork, although how you evaluate is likely going to be task-specific. If you think about where evaluation happens, it's always tied to a task (classification, regression, binary classification, ...) with an output layer. In the most common case you have only one output, which produces a classification; in that case you can just use the first array it outputs.
Otherwise, for multiple outputs, you'd have to define what you're evaluating. Usually tasks merge into one path.
If they don't, you'd have multiple output layers and would want one evaluation object per output.
ComputationGraph and MultiLayerNetwork both have an output method that gives you the raw arrays. That is typically what you pass to eval.eval.

Using the GPU for parallel computing in Matlab

I've reached a stage where my arrays have become massive and a single function takes about two days to compute.
I am working on image processing and using kmeans and a GMM via fitgmdist.
I have a workstation with Nvidia Tesla GPUs, which are on the supported list, and I would like to use their processing power to help speed up my work.
Looking into the documentation, I understand that in order to use the GPU functions, all I have to do is pass the array that is given to the functions to the GPU first, i.e.:
model_feats = get_feats(all_images);
% note: the output must not be named 'kmeans', or it shadows the function
cluster_idx = kmeans(model_feats, gaussians, 'EmptyAction','singleton', 'MaxIter',1000);
gmm{i} = fitgmdist(model_feats, 128, 'Options',statset('MaxIter',1000), ...
    'CovarianceType','diagonal', 'SharedCovariance',false, 'RegularizationValue',0.01, 'Start',cInd);
All of my processing time is taken up by these two functions. So if I want to use the GPU cores, is all I have to do to pass the array through gpuArray first? For example, the above would become:
temp_feats = get_feats(all_images);
model_feats = gpuArray(temp_feats);   % move the feature matrix onto the GPU
cluster_idx = kmeans(model_feats, gaussians, 'EmptyAction','singleton', 'MaxIter',1000);
gmm{i} = fitgmdist(model_feats, 128, 'Options',statset('MaxIter',1000), ...
    'CovarianceType','diagonal', 'SharedCovariance',false, 'RegularizationValue',0.01, 'Start',cInd);
Will this work? Will it work for any function if I first pass the array to gpuArray?
P.S. Sorry I have to ask here rather than just try it myself, but I do not currently have access to the workstation. I can request access to it, but before I do, I wanted to make sure my script will work with gpuArray.
Unfortunately, the short answer to your question is no, it won't work.
MATLAB's GPU support is only partial. The built-in functions that currently accept gpuArray inputs are listed here: http://de.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html
So, in my understanding, since kmeans is not on that list, it will not work. Someone please correct me if I'm wrong.
On the other hand, if you do a Google search you will find third-party MATLAB implementations of kmeans on GPUs. Since I can't guarantee the quality of the code, I won't post a link.
Good luck!

create new event in output adapter streaminsight

I have the following problem in StreamInsight. I have a query where new tasks from an order come in and trigger an output adapter to make a prediction. The output adapter writes the predicted task cycle time to a table (in Windows Azure). The prediction is based on neural networks and is plugged into the output adapter. After the prediction is written to the table, I want to do something else with all the predicted times. So in a second query I want to count the number of written tasks in a 5-minute time window. When the number of predicted values saved in the table equals the number of tasks in an order, I want to get all the predicted values from the table and make a prediction of the order cycle time.
For this idea I need to create a new event in my output adapter so I know the predicted time has been written to the table. But I don't think it's possible to enqueue new events into the StreamInsight server from an output adapter.
Maybe this figure makes the problem clearer:
http://i40.tinypic.com/4h4850.jpg
Hope someone can help me.
Thanks, Carlo
First off, I'm assuming you are using pre-2.1 StreamInsight, based on your use of the term "output adapter".
From what you've posted, I would strongly recommend that your adapters do either input or output, but not both. This cuts down on complexity, makes the implementation easier, and, depending on how you wrote the adapter, gives you a reusable piece of code in your solution.
If you want to send data from StreamInsight to your neural network prediction engine, you will need to write an output adapter to do that. Then I would create an input adapter that gets the results from the prediction engine and enqueues them into StreamInsight. After creating your stream from that input adapter, you can use dynamic query composition to share the stream between a Windows Azure storage output adapter and your next query.
If your prediction engine can "push" data to your input adapter, that would be the way to go. If not, you'll have to poll for results.
There is a lot more to this, but it's difficult to drill into more specifics without more details.
Hope this helps.