I want to get depth data (not disparity) from an iPhone 7 Plus.
I am referring to the sample code "AVCamPhotoFilter", and I can now get disparity data.
But I don't know how to get depth data.
According to the AVDepthData reference,
depthDataMap contains either depth data or disparity data.
If I want depth data,
do I need to set activeDepthDataFormat, or calculate it from the disparity data?
How do I set activeDepthDataFormat?
Or how do I calculate it?
I'm looking forward to hearing from you.
If you haven't yet, check out: https://developer.apple.com/videos/play/wwdc2017/507/
To calculate: Check out the video at 11:00
To get the data directly, you want to use
func converting(toDepthDataType depthDataType: OSType)
and use one of the data types at 14:20 in the video, which are:
kCVPixelFormatType_DisparityFloat16
kCVPixelFormatType_DisparityFloat32
kCVPixelFormatType_DepthFloat16
kCVPixelFormatType_DepthFloat32
Edited to include data type names.
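For the calculation route: Apple's disparity formats store disparity in units of 1/meters, so depth in meters is just the reciprocal of disparity. A minimal numpy sketch of that math, assuming the disparity map has already been copied out of its CVPixelBuffer into a float array (in a real app you would stay in Swift and use converting(toDepthDataType:) instead):

import numpy as np

def disparity_to_depth(disparity):
    """Convert a disparity map (in 1/meters) to a depth map (in meters).

    Zero or non-finite disparities are marked as NaN (no valid depth).
    """
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.full_like(disparity, np.nan)
    valid = np.isfinite(disparity) & (disparity > 0)
    depth[valid] = 1.0 / disparity[valid]
    return depth

# Example: a disparity of 2.0 (1/m) corresponds to a depth of 0.5 m.
print(disparity_to_depth([2.0, 0.0, 4.0]))  # [0.5  nan  0.25]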
I am simulating the lid-driven cavity case, and I am trying to get all the streamlines with ParaView's Stream Tracer, but I only get the ones that intersect the reference line; because of that, there are vortices that are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off with trying to derive a data set that contains seed points that are likely to trace a stream through the vortices that you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set when turning on advanced option Compute Vorticity), find the magnitude of that (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
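If you prefer to script this rather than click through the GUI, here is a rough pvpython sketch of the vorticity-threshold pipeline described above. The file name is hypothetical, the velocity field is assumed to be named 'U', and the filter/property names follow paraview.simple and may differ slightly between ParaView versions:

from paraview.simple import *

reader = OpenDataFile('cavity.vtu')  # hypothetical input file

# Compute vorticity from the velocity field (advanced option).
grad = GradientOfUnstructuredDataSet(Input=reader)
grad.ScalarArray = ['POINTS', 'U']
grad.ComputeVorticity = 1

# Magnitude of the vorticity vector.
calc = Calculator(Input=grad)
calc.ResultArrayName = 'vortMag'
calc.Function = 'mag(Vorticity)'

# Keep only the high-vorticity regions to use as seeds.
seeds = Threshold(Input=calc)
seeds.Scalars = ['POINTS', 'vortMag']
seeds.ThresholdRange = [10.0, 1e30]  # lower bound must be tuned to your data

# Trace streamlines starting from those seed points.
tracer = StreamTracerWithCustomSource(Input=reader, SeedSource=seeds)
tracer.Vectors = ['POINTS', 'U']

Show(tracer)
Render()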
You have to choose a representative seed source for your streamlines.
You could use a "Sphere Source", set in the Stream Tracer properties.
If that fails, you can use a Stream Tracer With Custom Source and supply your own seed source, which you will have to create yourself first.
I would like to know if there are any libraries/algorithms/techniques that help extract the user's context (walking/standing) from accelerometer data (taken from any smartphone).
For example, I would collect accelerometer data every 5 seconds for a definite period of time and then identify the user's context (e.g., for the first 5 minutes the user was walking, then the user was standing for a minute, and then he continued walking for another 3 minutes).
Thank you very much in advance :)
Check out the new activity recognition APIs:
http://developer.android.com/google/play-services/location.html
It's still a research topic; please look at this paper, which discusses an algorithm:
http://www.enggjournals.com/ijcse/doc/IJCSE12-04-05-266.pdf
I don't know of any such library.
It is a very time-consuming task to write such a library. Basically, you would build a database of the "user contexts" you wish to recognize.
Then you collect data and compare it to those in the database. As for how to compare, see Store orientation to an array - and compare; the same holds for accelerometer data.
Walking/running data is analogous to heart-rate data in a lot of ways. In terms of filtering out the noise and getting smooth peaks, look into noise filtering and peak detection algorithms. The following is used to obtain heart-rate information for heart patients; it should be a good starting point: http://www.docstoc.com/docs/22491202/Pan-Tompkins-algorithm-algorithm-to-detect-QRS-complex-in-ECG
Think about how you want to filter out the noise and detect peaks; the filters will obviously depend on the raw data you gather, but it's good to have a general idea of what kind of filtering you'd want to do on your data. Then think about what needs to be done once you have filtered data. In your case, think about how you would design an algorithm to find out when the data indicates activity (like walking, running, etc.) and when it shows the user being stationary. This is a fairly challenging problem to solve once you consider the dynamics of the device itself (how it's positioned when the user is walking/running) and the fact that there are very few (if any) benchmarked algorithms that do this with raw smartphone data.
Start with determining the appropriate algorithms, and then tackle the complexities (mentioned above) one by one.
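To make the filter-then-detect idea concrete, here is a toy Python sketch, assuming a 50 Hz 3-axis accelerometer stream; the band edges and peak threshold are placeholder values you would have to tune to your own data:

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50  # assumed sampling rate in Hz

def classify_windows(accel, window_s=5):
    """Label each window of raw x/y/z samples 'walking' or 'stationary'.

    accel: (n, 3) array of accelerometer samples.
    """
    # The magnitude removes dependence on device orientation.
    mag = np.linalg.norm(accel, axis=1)
    # Band-pass around typical step frequencies (~0.5-3 Hz).
    b, a = butter(2, [0.5, 3.0], btype='band', fs=FS)
    filtered = filtfilt(b, a, mag)

    labels = []
    win = int(window_s * FS)
    for start in range(0, len(filtered) - win + 1, win):
        chunk = filtered[start:start + win]
        # Steps show up as regular peaks; a quiet window has few or none.
        peaks, _ = find_peaks(chunk, height=0.5)  # threshold to tune
        labels.append('walking' if len(peaks) >= 3 else 'stationary')
    return labels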
I am working with the Zillow neighborhood data provided freely at http://www.zillow.com/howto/api/neighborhood-boundaries.htm. I have successfully imported the data with SRID 4120. Now I am trying to find the neighborhoods given a coordinate (lat, long) and a radius. Finding the neighborhood in which my point lies is easy and is done through the STIntersects method. I am actually confused by STDistance: for the complete WA state data, it gives me a maximum distance of 4.xxx relative to any point in WA. My question is: what is a good way to find the points within a given radius, and what is the unit?
Thanks,
zAfar
Got it; I was importing geography data into a geometry column.
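For anyone hitting the same issue: STDistance on a geometry column returns distances in the column's coordinate units (degrees for lat/long data, which is where the 4.xxx came from), while on a geography column it returns meters. If you want to sanity-check a radius query outside the database, here is a small haversine sketch (the query point and sample rows are made up):

import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical query: points within a 6 km radius of downtown Seattle.
center = (47.6062, -122.3321)
radius_m = 6000
points = [('Fremont', 47.6510, -122.3500), ('Tacoma', 47.2529, -122.4443)]
nearby = [name for name, lat, lon in points
          if haversine_m(center[0], center[1], lat, lon) <= radius_m]
print(nearby)  # ['Fremont']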
I tried to use a clear title. What I am trying to achieve: I have a list of data as below.
ID - ID of the person; not used in the calculation, but needed in the output to identify the person
Education = {1=Degree, 2=Master, 3=PhD}
CGPA - value from 2.00 to 4.00
Computer = {1=Yes, 0=No} (computer knowledge)
Oversea = {1=Yes, 0=No} (willing to travel overseas)
ID,Education,CGPA,Computer,Oversea
001,3,3.14,1,0
002,1,3.68,1,1
003,2,2.76,0,1
..........
.........
Say I have 1,000 rows with different values. My purpose is this: given one similar row of data, I want to get the closest record out of the 1,000 rows. I am using WEKA.
I am trying to do something like finding the best resume for a particular job.
I have checked and worked through many examples to understand WEKA better, but I just can't get it done. I am new to WEKA. I tried classifiers and decision trees, but couldn't make them work. I am able to get a prediction from given data, but I cannot filter the data list according to a given input.
Any help is much appreciated. Any link that directs me to an article about this, any idea, or even a single spark will be useful.
Sounds like you want to use a nearest neighbour classifier (IBk in Weka). If you're using the Weka GUI, you can only get the class, so you'll have to implement some code to retrieve the actual nearest neighbour.
Have a look at this question for a way of doing this.
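If you want to see the idea outside of Weka, here is a minimal nearest-neighbour sketch in plain numpy using the attributes from the question; the min-max scaling is there so that CGPA (a 2.00-4.00 range) does not dominate the binary attributes, in the same spirit as Weka's default Euclidean distance, which also normalizes attributes:

import numpy as np

# Columns: Education, CGPA, Computer, Oversea (IDs kept separately).
ids = ['001', '002', '003']
X = np.array([[3, 3.14, 1, 0],
              [1, 3.68, 1, 1],
              [2, 2.76, 0, 1]], dtype=float)

def nearest(query, X, ids):
    """Return the ID of the row closest to `query` (Euclidean distance)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)
    Xs = (X - lo) / scale
    q = (np.asarray(query, dtype=float) - lo) / scale
    dists = np.linalg.norm(Xs - q, axis=1)
    return ids[int(np.argmin(dists))]

print(nearest([3, 3.20, 1, 0], X, ids))  # closest record: '001'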
I have a picture of 1200×1175 pixels. I want to train a net (MLP or Hopfield) to learn a specific part of it (201×111 pixels) and save the weights, so that a new net (with the same features) can find that specific part without being trained again. Now there are these questions: what kind of net is useful, MLP or Hopfield? If MLP, how many hidden layers? The trainlm function is unusable because of an "out of memory" error. I have converted the picture to a binary image; is that useful?
What exactly do you need the solution to do? Find an object within an image (like "Where's Waldo")? Will the target object always be the same size and orientation? Might it look different because of lighting changes?
If you just need to find a fixed pattern of pixels within a larger image, I suggest using a straightforward correlation measure, such as cross-correlation, to find it efficiently.
If you need to contend with any of the issues mentioned above, then there are two basic solutions: 1. build a model using examples of the object in different poses, scalings, etc., so that the model will recognize any of them, or 2. develop a way to normalize the patch of pixels being examined to minimize the effect of those distortions (like Hu's invariant moments). If nothing else, you'll want to perform some sort of data reduction to get the number of inputs down. Technically, you could also try a model which is invariant to rotations, etc., but I don't know how well those work. I suspect they are more temperamental than traditional approaches.
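To make the cross-correlation suggestion concrete, here is a minimal sketch using scikit-image's match_template (assuming the fixed-pattern case, with no scale or rotation changes); it cuts the 201×111 patch out of a random stand-in image and finds it again:

import numpy as np
from skimage.feature import match_template

# Stand-in image and the patch to find (111 rows x 201 columns).
rng = np.random.default_rng(0)
image = rng.random((1200, 1175))
patch = image[500:611, 300:501]

# match_template computes normalized cross-correlation; the peak of the
# result is the top-left corner of the best match.
result = match_template(image, patch)
row, col = np.unravel_index(np.argmax(result), result.shape)
print(row, col)  # 500 300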
I found AdaBoost to be helpful in picking out only the important bits of an image. That, and resizing the image to something very small (like 40×30) using a Gaussian filter, will speed things up and put weight on larger areas of the photo rather than on tiny, insignificant pixels.