I am working on a project about semantic segmentation of medical images, where each image has multiple masks and each mask shows a different disease in the human eye. I don't know exactly how to train my model with multiple masks. Should I stack the masks first, or will I then lose the information about the different diseases? And how should I feed this information to my model?
Before you train your model, you first need to think about how it will be used afterward. At test time, how many masks do you want the model to output per input image? If you want your system to output multiple masks, then you should train a network with multiple binary output "heads". Alternatively, if you want the model to output a single mask with a different label per disease, then you should train a model with a single classification head, but with multiple classes per pixel.
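For example, a minimal Keras sketch of the two options might look like this (the input size, the number of diseases, and the tiny placeholder backbone are all assumptions for illustration, not details from your question):

```python
from tensorflow.keras import layers, Model, Input

NUM_DISEASES = 4  # assumed number of disease masks per image

inp = Input(shape=(256, 256, 3))
# Tiny placeholder backbone; a real segmentation network (e.g. a U-Net) would go here.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

# Option A: one binary "head" per disease -> the model outputs multiple masks,
# and a pixel can belong to several diseases at once.
heads = [layers.Conv2D(1, 1, activation="sigmoid", name=f"disease_{i}")(x)
         for i in range(NUM_DISEASES)]
multi_head_model = Model(inp, heads)
multi_head_model.compile(optimizer="adam", loss="binary_crossentropy")

# Option B: a single head with one class per pixel (background + each disease) ->
# the model outputs one mask with mutually exclusive labels.
single_head = layers.Conv2D(NUM_DISEASES + 1, 1, activation="softmax")(x)
single_head_model = Model(inp, single_head)
single_head_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

With the multi-head option the targets are the separate binary masks (so overlapping diseases are allowed); with the single-head option the target is one integer label map, which implicitly assumes at most one disease per pixel.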
If I want to train an object-detection model that can detect 5 classes in a picture, is it important to pre-train this model on a large dataset like COCO (80 categories of objects), or is it enough to pre-train it on just 5 categories of COCO (assuming these 5 categories can be found in COCO)?
If the 5 classes that you want to detect are already in the MS-COCO dataset, there are two possible options:
- Use the existing object detection model that was pretrained on the MS-COCO dataset. If the detections are satisfactory, great, and you can continue using it.
- If not, you can fine-tune the model on data containing your classes of interest: basically, use the pretrained MS-COCO weights as a starting point for training the network on your own data consisting of those 5 classes (the more data, the better).
Now, if the classes that you wish to detect are not in the original MS-COCO dataset, you will be much better off using the pretrained MS-COCO weights (trained on 80 classes, even if they are not relevant to yours) in the early convolutional layers, and then training the detection and deeper layers of the network on your own dataset. This is because the low-level features (like edges, blobs, etc.) that the network has learned are mostly common to all classes, and reusing them will greatly speed up training.
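To illustrate the idea of reusing pretrained low-level features, here is a rough Keras sketch (the backbone, the layer split, and the head are assumptions; a real detector would also need box-regression heads):

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 5  # your 5 classes of interest

# Pretrained backbone (ImageNet weights used here simply as an example of reusing low-level features).
backbone = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the early layers: edge/blob features transfer well across classes.
for layer in backbone.layers[:100]:
    layer.trainable = False

# New task-specific head, trained from scratch on your data.
# (A real detector would also add box-regression heads; this sketch only classifies.)
x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(backbone.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```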
I cropped the following image from a tutorial.
This diagram shows the rough structure of a standard neural network: it takes one image as input and makes a prediction.
What I am thinking about is some kind of parallel structure; think about something like the following image.
Not exactly as in the above image, but you can see I am trying to use two images to make one prediction. This image is just for you to get an idea of what I am trying to ask.
Is it possible to use more than one image (two, three, ...) like this, or in any other way, in order to make one prediction? This is not meant for actual photo classification, but I think such a technique could be used in a field like audio classification, where a graphical representation of the data is used with image classification techniques.
Any advice, guidance or opinion on this?
If we consider implementing exactly what is in the diagram: if I use a high-level API like Keras (keras.models.Sequential), all we can do is keep adding one layer after another.
So what kind of technology can I use to implement the parallel structure?
Yes, you can use more than one image as input. See, for example, the Siamese Neural Network, which takes 2 images as input and passes them through a shared network architecture.
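As a rough illustration, with the Keras functional API (rather than Sequential) you can define one sub-network and apply it to two image inputs, so the weights are shared; all sizes and layer choices below are made up for the sketch:

```python
from tensorflow.keras import layers, Model, Input

# Shared sub-network: defined once and applied to both inputs, so the weights are shared.
base_in = Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation="relu")(base_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
shared_cnn = Model(base_in, x)

img_a = Input(shape=(64, 64, 3))
img_b = Input(shape=(64, 64, 3))
feat_a = shared_cnn(img_a)   # same weights...
feat_b = shared_cnn(img_b)   # ...applied to the second image

# Combine both feature vectors and make one prediction.
merged = layers.Concatenate()([feat_a, feat_b])
pred = layers.Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[img_a, img_b], outputs=pred)
model.compile(optimizer="adam", loss="binary_crossentropy")
```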
If instead you want to have an arbitrary, variable number of images as input, you can use an architecture based on recurrent neural networks, such as a Convolutional LSTM, which essentially applies a CNN to every image of the input sequence and aggregates the results with an LSTM recurrent network.
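If you go the recurrent route, Keras provides a ConvLSTM2D layer that consumes a whole sequence of images; in the sketch below the shapes and layer sizes are assumptions, and the None time dimension allows a variable number of images per sample:

```python
from tensorflow.keras import layers, Model, Input

# `None` in the time dimension allows a variable number of input images per sample.
seq_in = Input(shape=(None, 64, 64, 3))

x = layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=False)(seq_in)
x = layers.GlobalAveragePooling2D()(x)
pred = layers.Dense(1, activation="sigmoid")(x)

model = Model(seq_in, pred)
model.compile(optimizer="adam", loss="binary_crossentropy")
```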
I have a dataset of chocolates. I need to detect whether a chocolate has scratches or not. I am planning to do the detection with a convolutional neural network using Caffe. But how do I decide which neural network architecture will suit my dataset?
Also, how do I generate heat values (a heatmap) when there are scratches in an image?
I have tried normal image processing algorithms and they did not work.
Abnormal Image
Normal Image
Based on the little info you provide, the network architecture choice should be the last of your concerns. Also "trying normal image processing algorithms" is quite a vague statement.
A few points to consider:
How big is the dataset? Are the chocolate photos taken in a controlled setting where they are always similar to your example photos or are they taken in the wild, i.e. where they could have different lighting conditions, positions, etc.? Is the dataset balanced?
How is the dataset labelled? Is it just a class for the whole image specifying normal vs. abnormal? If so, you'd just be doing classification, and one way to visualise the location of the scratches (if they turn out to be the most prominent feature for the classification) is to use gradient-weighted class activation maps (Grad-CAM). On the other hand, if your dataset has labelled scratch points over the images, then you can directly train your network to output heatmaps.
Once your dataset is properly set up with a training and validation set, you can start with a simple, small baseline convolutional network architecture, and then try out different, bigger network architectures like VGG16, ResNet, etc., and check whether they improve performance on your validation set.
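As a rough illustration of such a baseline (shown in Keras just for brevity, even though you mention Caffe; the input size and filter counts are placeholders, not recommendations), a small normal-vs-abnormal classifier could look like this:

```python
from tensorflow.keras import layers, models

# Small baseline: normal (0) vs. abnormal (1) chocolate photos.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)
```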
I want to use a pre-trained model for face identification. I am trying to use a Siamese architecture, which requires only a small number of images. Could you give me a trained model that I can adapt to the Siamese architecture? How can I change the network model so that I can feed in two images and find their similarity (I do not want to create it based on the tutorial here)? I only want to use the system for a real-time application. Do you have any recommendations?
I suppose you can use this model, described in Xiang Wu, Ran He, Zhenan Sun, Tieniu Tan, A Light CNN for Deep Face Representation with Noisy Labels (arXiv 2015), as a starting point for your experiments.
As for the Siamese network, what you are trying to learn is a mapping from a face image into some high-dimensional vector space, in which distances between points reflect the (dis)similarity between faces.
To do so, you only need one network that takes a face as input and produces a high-dimensional vector as output.
However, to train this single network using the Siamese approach, you are going to duplicate it: create two instances of the same net (you need to explicitly link the weights of the two copies). During training you provide pairs of faces to the nets, one face to each copy; a single loss layer on top of the two copies then compares the high-dimensional vectors representing the two faces and computes a loss according to the "same/not same" label associated with that pair.
Hence, you only need the duplication for training. At test time ("deploy") you will have a single net providing you with a semantically meaningful high-dimensional representation of faces.
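A minimal sketch of that training setup (in Keras rather than Caffe, with a made-up embedding net and a standard contrastive loss; everything here is illustrative, not the exact architecture from the papers below):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

# Single embedding net: face image -> high-dimensional vector (layer sizes are placeholders).
emb_in = Input(shape=(96, 96, 3))
x = layers.Conv2D(32, 3, activation="relu")(emb_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
embedding_net = Model(emb_in, layers.Dense(128)(x))

# Two "copies" for training: reusing the same Model object is what links the weights.
face_a = Input(shape=(96, 96, 3))
face_b = Input(shape=(96, 96, 3))
distance = layers.Lambda(
    lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True)
)([embedding_net(face_a), embedding_net(face_b)])

def contrastive_loss(y_true, dist, margin=1.0):
    # y_true = 1 for "same person", 0 for "not same".
    y_true = tf.cast(y_true, dist.dtype)
    return tf.reduce_mean(y_true * tf.square(dist) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - dist, 0.0)))

siamese = Model([face_a, face_b], distance)
siamese.compile(optimizer="adam", loss=contrastive_loss)

# At deploy time you only use the single embedding_net, e.g.:
# vectors = embedding_net.predict(batch_of_faces)
```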
For a more advanced Siamese architecture and loss, see this thread.
On the other hand, you might want to consider the approach described in Oren Tadmor, Yonatan Wexler, Tal Rosenwein, Shai Shalev-Shwartz, Amnon Shashua, Learning a Metric Embedding for Face Recognition using the Multibatch Method (arXiv 2016). This approach is more efficient and easier to implement than pair-wise losses over image pairs.
So here is the setup: I have a set of images (labeled train and test) and I want to train a conv net that tells me whether or not a specific object is within the image.
To do this, I followed the TensorFlow tutorial on MNIST and trained a simple conv net, restricted to the area of interest (the object), on images of size 128x128. The architecture is as follows: three successive blocks, each consisting of 2 conv layers and 1 max-pool down-sampling layer, followed by a fully connected softmax layer (with two classes, 0 and 1, depending on whether the object is present or not).
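Roughly, a simplified Keras-style sketch of what I described would be (the exact tutorial code differs, so take this only as an outline):

```python
from tensorflow.keras import layers, models

# Sketch of the network described above: 3 blocks of (2 conv + max-pool),
# then a fully connected layer and a 2-class softmax (object present / not present).
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```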
I implemented it using TensorFlow, and this works quite well, but since I have enough computing power I was wondering how I could improve the complexity of the classification:
- adding more layers?
- adding more channels at each layer? (currently 32, 64, 128 and 1024 for the fully connected layer)
- anything else?
But the most important part is that now I want to detect this same object in larger images (roughly 600x600, whereas the size of the object should be around 100x100).
I was wondering how I could use the previously trained "small" network, built for small images, in order to pretrain a larger network on the large images. One option could be to classify the image using a sliding window of size 128x128 and scan the whole image, but I would like, if possible, to train a whole network on it.
Any suggestion on how to proceed? Or an article/resource tackling this kind of problem? (I am really new to deep learning, so sorry if this is a stupid question...)
Thanks!
I suggest that you continue reading on the field overall. Your search terms should include CNN, image classification, neural net, AlexNet, GoogLeNet, and ResNet. These will return many articles, online classes and lectures, and other materials to help you learn about classification with neural nets.
Don't just add layers or filters: the complexity of the topology (net design) must be fitted to the task; a net that's too complex will over-fit the training data. The one you've been using is probably LeNet; the three I cite above are for the ImageNet image classification contest.
Since you are working on images, I would suggest using a pretrained image classification network (like VGG, AlexNet, etc.) and fine-tuning it on your 128x128 image data. In my experience, unless you have a very large dataset, a fine-tuned network will give better accuracy and also save training time. After building a good image classifier on your dataset, you can use any popular algorithm to generate region proposals from the image. Then take all the region proposals, pass them to the classification network one by one, and check whether the network classifies a given region proposal as positive or negative. If it is classified as positive, then your object is most probably present in that region; otherwise it is not. If there are a lot of region proposals in which the object is present according to the classifier, then you can use non-maximum suppression to reduce the number of positive proposals.
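A rough sketch of that pipeline, using a plain sliding window as the proposal generator (the window size, stride, score threshold and the `classifier` variable are all assumptions for illustration):

```python
import numpy as np

def sliding_window_proposals(image, window=128, stride=64):
    """Yield (x, y, crop) windows covering the whole image."""
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            yield x, y, image[y:y + window, x:x + window]

def non_max_suppression(boxes, scores, iou_thresh=0.3):
    """Keep the highest-scoring boxes, dropping overlapping ones. boxes: (N, 4) as x1, y1, x2, y2."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

# Usage sketch: `classifier` is your trained 128x128 network returning P(object present).
# boxes, scores = [], []
# for x, y, crop in sliding_window_proposals(large_image):
#     p = classifier.predict(crop[None])[0, 1]
#     if p > 0.5:
#         boxes.append([x, y, x + 128, y + 128]); scores.append(p)
# kept = non_max_suppression(np.array(boxes), np.array(scores))
```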