How to subtract a single value from every voxel in an MRI image?

I have to ask a technical question about FSL; maybe someone can help me with it.
I have ADC maps from a DWI MRI scan of a patient. Now I want to subtract a single value from every voxel of this ADC map and generate a "new" ADC image from which I can then extract features.
Is this possible with FSL or fslmaths? This step should act as something like a normalization.
Thank you for your help in advance!
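For what it's worth, fslmaths supports this directly: its -sub operator accepts either a number or an image. A minimal sketch, called from MATLAB via system (the filenames and the value 100 are placeholders; the quoted command also runs as-is in a shell):
% Subtract a constant from every voxel using the fslmaths -sub operator.
% 'ADC_map' and 'ADC_shifted' are placeholder NIfTI image names.
system('fslmaths ADC_map -sub 100 ADC_shifted');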

Related

Raw data acquisition with USB microphone

I'm a final-year student in mechanical engineering, working on IET (the impulse excitation technique). I need to build a test setup to find Young's modulus, the shear modulus, and Poisson's ratio with this technique. The link below shows the product I would like to build. In this method, after the metal sample is fixed at its node points, it is struck with a hammer; the sound is then recorded and processed. I have built a prototype of this product, but I'm using a low-grade microphone (approx. $10). I use a simple FFT method to find the frequency.
https://www.imce.eu/products/rfda-basic
Here are my questions. I need to find the frequency of the metal sample to calculate the coefficients mentioned above. I detected the frequency value with simple MATLAB code. Could these be correct values? Can the filters in my computer's sound card change these values? I have found USB microphones that can measure between 50 Hz and 50 kHz, and these values match the range written in the standard (ASTM E1876, the IET standard). Would just changing my microphone help me get the right result? Can I get the raw sound data with a USB microphone, or do I need to use a DAQ card and a compatible microphone?
I hope I have explained this well.
If you have any experience with this, I would be glad if you shared it.
Thanks for helping.
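For the frequency-detection part, here is a minimal MATLAB sketch of picking the dominant resonance from a recorded signal; x and fs are assumed to be your recorded samples and the sampling rate in Hz:
% Find the dominant frequency of an impulse response x sampled at fs.
% hann() requires the Signal Processing Toolbox.
N = length(x);
xw = x(:) .* hann(N);          % apply a window; x(:) forces a column vector
X = abs(fft(xw));              % magnitude spectrum
f = (0:N-1) * fs / N;          % frequency axis in Hz
half = 1:floor(N/2);           % keep the one-sided spectrum
[~, idx] = max(X(half));
f0 = f(idx)                    % dominant resonance frequency in Hz
Note that the sound card's anti-aliasing filter limits the usable band to below half the card's sampling rate, which is the main thing to check against the ASTM range.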

How can I find a camera's sensor size in MATLAB

I am doing a university project using MATLAB and a webcam.
My equations require knowledge of my camera's sensor size. Is there any way to calculate this value in MATLAB?
I have been stuck on this problem for four days; any help will be appreciated.
Thanks in advance.
I think you can find what you need for your equation by extracting the camera parameters. There are some nicely implemented functions in MATLAB to do so.
You can use these functions directly after taking several pictures of a checkerboard with your webcam:
http://uk.mathworks.com/help/vision/geometric-camera-calibration.html
or the app
http://uk.mathworks.com/videos/camera-calibration-with-matlab-81233.html
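A minimal sketch of that calibration workflow (Computer Vision Toolbox; the folder name and square size are assumptions):
% Estimate camera intrinsics from checkerboard images.
files = dir(fullfile('calib_images', '*.jpg'));            % hypothetical folder
imageFiles = fullfile({files.folder}, {files.name});
[imagePoints, boardSize] = detectCheckerboardPoints(imageFiles);
squareSize = 25;                                           % square size in mm (assumed)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
params = estimateCameraParameters(imagePoints, worldPoints);
disp(params.FocalLength)                                   % focal length in pixels, [fx fy]
The intrinsics give you the focal length in pixels; if you also know the lens's physical focal length from its datasheet, dividing the two gives the pixel pitch, and multiplying by the image resolution gives the sensor size.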
Most images contain a metadata structure called EXIF. Learn more about EXIF.
Generally, the EXIF structure contains all sensor- and image-related information: sensor size, ...
To extract the EXIF structure from an image using MATLAB, use the exifread function, which returns all EXIF tags.
output = exifread(filename)
Then process it to extract the sensor size.
Learn more about the exifread function.
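Note that exifread has been removed from recent MATLAB releases; here is a minimal sketch using imfinfo instead (the file name is hypothetical):
% Read EXIF metadata via imfinfo; for JPEGs the EXIF tags appear in the
% DigitalCamera field when present.
info = imfinfo('photo.jpg');
if isfield(info, 'DigitalCamera')
    exif = info.DigitalCamera;   % e.g. FocalLength, FNumber, ExifImageWidth
    disp(exif)
end
Be aware that the sensor size itself is rarely stored directly; you usually have to derive it from tags such as the focal length and the 35 mm-equivalent focal length.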

Mapping Vision Outputs To Neural Network Inputs

I'm fairly new to MATLAB, but have acquainted myself with Simulink and Computer Vision over the past few days. My problem statement involves taking a traffic/highway video input and detecting if an accident has occurred.
I plan to do this by extracting the values of centroid to plot trajectory, velocity difference (between frames) and distance between two vehicles. I can successfully track the centroids, and aim to derive the rest of the features.
What I don't know is how to map these to an ANN. Every image has more than one vehicle blob, which means there are multiple centroids in a single frame/image. So how does an NN act on multiple inputs (the extracted features per vehicle) simultaneously? I am obviously missing the link; please help me figure it out.
Also, am I looking at time series data?
I am not exactly sure about your question. The problem can be treated both as time-series data and not; you might be able to transform the time-series version of the problem so that it can be solved using an ANN, but that is sort of a Maslow's hammer :). Could you rephrase the problem?
As you said, you could give it features from two or three frames and then use the classifier to detect accident or not, but it might be difficult to train such a classifier. The problem is really difficult, so you might need tons of training samples to get it right, especially really good negative samples (for example, cars travelling close to each other).
There are multiple ways you can try to solve this problem of accident detection. For example: build a classifier (ANN/SVM etc.) to detect accidents without time-series data, in which case your input would be accident images and non-accident images, or some sort of positive and negative samples for training, and later images for testing. In this specific case you are not looking at time-series data, but you might need lots of features (this is, in some sense, a single-frame version of the problem).
The second method would be to use time-series data, in which case you will have to detect the features, track them (say using Lucas-Kanade or Horn-Schunck optical flow), and then use the information about velocity and centroid to detect the accident. You might even be able to formulate it for HMMs.
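On the "multiple blobs per frame" issue, one common trick is to form a fixed-length input per vehicle pair rather than per frame. A minimal sketch, assuming centroids and velocities are N-by-2 matrices for the N blobs in the current frame (the feature layout is just an illustration):
% Build one fixed-length sample per vehicle pair for the ANN.
pairs = nchoosek(1:size(centroids,1), 2);
X = zeros(size(pairs,1), 6);
for k = 1:size(pairs,1)
    i = pairs(k,1); j = pairs(k,2);
    d  = norm(centroids(i,:) - centroids(j,:));     % separation distance
    dv = norm(velocities(i,:) - velocities(j,:));   % relative speed
    X(k,:) = [centroids(i,:), centroids(j,:), d, dv];
end
% Each row of X is one sample; label rows as accident / no accident.
This sidesteps the variable-input-size problem, at the cost of deciding per pair rather than per frame.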

Ideas for extracting features of an object using image keypoints

I'd appreciate it if you could help me create a feature vector for a simple object using keypoints. For now I use the ETH-80 dataset; the objects have an almost uniform blue background, and the pictures are taken from different views.
After creating a feature vector, I want to train a neural network with this vector and use that neural network to recognize an input image of an object. I don't want to make it complex; input images will be as simple as the training images.
I asked similar questions before, and someone suggested using the average value of a 20x20 neighborhood around each keypoint. I tried it, but it doesn't seem to work with the ETH-80 images because of the different views. That's why I am asking another question.
SURF or SIFT. Look for interest point detectors. A MATLAB SIFT implementation is freely available.
Update: Object Recognition from Local Scale-Invariant Features
SIFT and SURF features consist of two parts: the detector and the descriptor. The detector finds a point in some n-dimensional space (4D for SIFT); the descriptor is used to robustly describe the surroundings of said points. The latter is increasingly used for image categorization and identification in what is commonly known as the "bag of words" or "visual words" approach. In the simplest form, one collects all descriptors from all images and clusters them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech-101 dataset:
http://www.vlfeat.org/applications/apps.html#apps.caltech-101
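In MATLAB, the Computer Vision Toolbox wraps this pipeline in bagOfFeatures. A minimal sketch (the folder name is hypothetical):
% Build a visual vocabulary and encode images as visual-word histograms.
imds = imageDatastore('eth80_images', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');
bag = bagOfFeatures(imds);              % clusters SURF descriptors with k-means
v = encode(bag, readimage(imds, 1));    % fixed-length histogram of visual words
% v can be used directly as the input feature vector for your neural network.
Because the histogram length is fixed (the vocabulary size), it gives you exactly the kind of view-tolerant, fixed-length feature vector a neural network needs.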

Optical character recognition program for photographs

I need to develop an optical character recognition program in MATLAB (or any other language that can do this) to be able to extract the reading in this photograph.
The program must be able to load as many picture files as possible, since I have around 40,000 pictures that I need to work through.
The general aim of this task is to record intraday gas readings from the specific gas meter shown in the photograph. There is a webcam currently set up that is programmed to photograph the readings every minute, and so the OCR program would help in building historic intraday gas-reading data.
Which is the best software to do this in, and are there any online sources available for this?
I'd break down the basic recognition steps as follows:
Locate meter display within the image
Isolate and clean up the digits
Calculate features
Classify each digit using a model you've trained using historic examples
Assuming that the camera for a particular location does not move, step 1 will only need to be performed once. Step 2 will include things like enhancing contrast and filtering noise. Step 3 can include any useful calculations you can think of, such as mean and skew of "ink" (white) pixels. Step 4 would utilize a model you build to classify a single digit as '0', '1', ... '9', and could be accomplished using k-nearest neighbors, logistic regression, SVM, neural network, etc.
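A minimal sketch of steps 1 and 2 in MATLAB (Image Processing Toolbox; the file name and crop rectangle are assumptions):
% Isolate the meter display, binarize, and split into per-digit images.
img = imread('meter_0001.jpg');                  % hypothetical frame
roi = imcrop(rgb2gray(img), [120 80 200 40]);    % display location (assumed fixed)
bw = imbinarize(roi);                            % threshold the digits
bw = bwareaopen(bw, 20);                         % remove small noise blobs
stats = regionprops(bw, 'BoundingBox');          % roughly one box per digit
digits = cell(1, numel(stats));
for k = 1:numel(stats)
    digits{k} = imcrop(bw, stats(k).BoundingBox);
end
Each cell of digits then feeds the feature and classification stages (steps 3 and 4).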
A couple of things would make step 1 in Predictor's answer easy: placing the cam directly above the meter, adding sufficient light, and maybe placing bright pink strips around the meter to help segment out the display :).
Once you do this, and as long as the cam remains fixed, you can define the segmentation manually once and then apply it to all subsequent images to extract the digits. If the lighting is good and consistent, you might just be able to use simple template matching to identify each of the segmented digits.
Actually, once you get a sample of all the digits, you might even be able to classify them with something simpler (like the sum of thresholded pixels).
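If the segmentation is stable, the template-matching idea can be as simple as this sketch; templates is assumed to be a 1x10 cell array of binary digit images collected by hand, with templates{1} holding '0':
% Classify one segmented digit image by normalized cross-correlation.
scores = zeros(1, 10);
for d = 1:10
    t = imresize(templates{d}, size(digitImg));     % match template size
    c = normxcorr2(double(t), double(digitImg));    % correlation surface
    scores(d) = max(c(:));
end
[~, best] = max(scores);
recognized = best - 1;   % map the index back to the digit value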
More recently, many object-detection methods have appeared that could also be used to deal with this problem.