I am working on a project where I want to look at a histogram of the orientations in an image. I used imgradient, but it seemed to pick up the very small orientations at each pixel (texture) and not the large, perceptually dominant orientations (shape) that you tend to notice in the image. How can I get a histogram of the orientations in the image at this larger spatial scale (i.e., lower spatial frequency)?
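One general approach (a sketch of the idea rather than anything imgradient does for you; shown in Python/OpenCV, with the sigma and bin count as illustrative assumptions) is to blur away the fine texture first so only coarse structure survives, then histogram the gradient orientations weighted by gradient magnitude so the dominant edges carry the histogram:

```python
import cv2
import numpy as np

# Minimal sketch: suppress fine texture by heavy blurring before taking
# gradients, then build a magnitude-weighted orientation histogram.
# The sigma (8) and bin count (36) are arbitrary illustrative choices.
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Larger sigma keeps only coarser structures (lower spatial frequencies).
coarse = cv2.GaussianBlur(img, (0, 0), sigmaX=8)

gx = cv2.Sobel(coarse, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(coarse, cv2.CV_32F, 0, 1)
mag = np.hypot(gx, gy)
ang = np.degrees(np.arctan2(gy, gx)) % 180  # orientation, not direction

# Weight each pixel by gradient magnitude so strong, perceptually
# dominant edges dominate the histogram instead of per-pixel texture.
hist, bin_edges = np.histogram(ang, bins=36, range=(0, 180), weights=mag)
```

Repeating this for several sigmas gives you a crude scale space, and you can pick the scale at which the dominant orientations stabilize.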
I am currently making a mobile match-3-like game in Unity. I made all the graphics for the gems (the objects you make matches with) in Inkscape at 256x256 and exported them as PNG files at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor, the graphics look "pixelated" and blurry. In my sprite settings I've set Pixels Per Unit to 256, checked Generate Mip Maps, I'm using the Bilinear filter mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality (my Main Camera's size is 10, but changing it made no difference to the sprite quality). What can I do to display my sprites "perfectly"? Do I have to export them in some other way from Inkscape, or do I have to change some Unity settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because of scaling
Your display resolution isn't a 256x256 region where those images are shown, which means they must be scaled in some manner to fit the desired region. Camera rendering is notoriously bad at scaling. Since your images aren't vector (and Unity doesn't support vector graphics formats anyway), scaling will always result in a loss of detail, especially hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first (sketched below).
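For the first option, a minimal sketch of pre-scaling the art offline with a high-quality filter (assuming Pillow; the file names and the 128x128 target size are illustrative, not from the question):

```python
from PIL import Image

# Pre-scale the source art offline with a high-quality filter instead of
# letting the GPU scale it at render time.
src = Image.open("gem_purple_256.png")
dst = src.resize((128, 128), Image.LANCZOS)  # Lanczos keeps edges crisp
dst.save("gem_purple_128.png")
```

You would then import the pre-scaled PNG as its own sprite and choose the variant whose pixel size matches how large the gem is actually drawn on screen.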
I am trying to stereo calibrate two different cameras.
It turns out the images should be the same size, but in my case the resolutions differ (1920x1080 and 2048x1024).
I'm not sure that changing the resolution of one camera's images is a good idea.
What is the best solution in this situation? If I do have to change the resolution, which images should I change (reduce the larger resolution to the smaller one, or vice versa)?
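One approach that avoids resampling either camera's images (sketched here in Python/OpenCV; objpoints, imgpoints1, and imgpoints2 are assumed to be the usual per-view chessboard corner lists from findChessboardCorners) is to calibrate each camera's intrinsics at its native resolution, then fix them so stereoCalibrate only has to estimate the extrinsics:

```python
import cv2

# Intrinsics estimated per camera, each at its own native resolution.
_, K1, d1, _, _ = cv2.calibrateCamera(objpoints, imgpoints1, (1920, 1080), None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(objpoints, imgpoints2, (2048, 1024), None, None)

# With CALIB_FIX_INTRINSIC, the imageSize argument is only used for
# initialization, so the mismatch between the two resolutions no longer
# matters; only the extrinsics R and T (plus E, F) are estimated.
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints1, imgpoints2,
    K1, d1, K2, d2,
    (1920, 1080),
    flags=cv2.CALIB_FIX_INTRINSIC)
```

If your toolchain insists on equal sizes end to end, downscaling the larger camera (2048x1024) toward the smaller is usually the safer direction, since upscaling invents pixels.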
I'm working on stereo vision and I am using data from The KITTI Vision Benchmark Suite. The calibration parameters they provide are very different from the parameters that the Stereo Camera Calibrator toolbox produces, and I couldn't find a way to use their data in MATLAB.
I also tried to use their calibration images in the Stereo Camera Calibrator toolbox, which looks like this:
[screenshot: Stereo Camera Calibrator toolbox in MATLAB]
But when I run the calibration, it fails, saying:

Calibration failed:
Unable to estimate camera parameters. Images may contain severe lens
distortion or the 3-D orientations of the calibration pattern are too
similar across images. If calibration pattern orientations are too
similar, try removing very similar images or adding additional images
with the pattern in varied orientations.
Please help me if you have any idea how to solve this.
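If the goal is just to use KITTI's published parameters rather than re-running a checkerboard calibration, one option (a hedged sketch in Python; the file name and the P0/P1 keys follow KITTI's calib-file format) is to parse the rectified 3x4 projection matrices and decompose them into intrinsics and baseline, which you could then hand to MATLAB's camera/stereo parameter objects:

```python
import numpy as np

def read_kitti_projections(path):
    """Parse the 3x4 projection matrices (P0:, P1:, ...) from a KITTI calib file."""
    mats = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, vals = line.split(":", 1)
            nums = np.array(vals.split(), dtype=np.float64)
            if nums.size == 12:
                mats[key.strip()] = nums.reshape(3, 4)
    return mats

P = read_kitti_projections("calib.txt")  # assumed file name
K = P["P0"][:, :3]   # the rectified cameras of a pair share these intrinsics
fx = K[0, 0]
# For rectified KITTI pairs, P = K [I | t] with t_x = -fx * baseline,
# so the baseline drops out of the fourth column.
baseline = -(P["P1"][0, 3] - P["P0"][0, 3]) / fx  # in metres
```

This sidesteps the toolbox entirely, which also avoids the failure above.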
In the past I have always seen that the properties position and positionInPixels are the same. This time position * 2 == positionInPixels. Can anyone tell me the difference between these two properties, or when their values will differ?
position is in points and positionInPixels is in pixels. On a non-Retina device, 1 point = 1 pixel. On a Retina device such as the iPhone 4/4S or the new iPad, 1 point = 2 pixels.
Per iOS Human Interface Guidelines:
Note: Pixel is the appropriate unit of measurement to use when discussing the size of a device screen or the size of an icon you create in an image-editing application. Point is the appropriate unit of measurement to use when discussing the size of an area that is drawn onscreen.
On a standard-resolution device screen, one point equals one pixel, but other resolutions might dictate a different relationship. On a Retina display, for example, one point equals two pixels.
See "Points Versus Pixels" in View Programming Guide for iOS for a complete discussion of this concept.
position gives you a position in points, whereas positionInPixels gives you the position in pixels. On an iPhone 4, for example, position can range from (0,0) to (320,480) in portrait mode; positionInPixels can range from (0,0) to (640,960), reflecting the higher resolution of its Retina display.
Basically, they are different on Retina display devices and the same on non-Retina display devices.
Hope this helps...
When you use a Retina display, your canvas still consists of 320x480 points, but each point is composed of 2 pixels in each dimension. On a standard display, each point is one pixel. This is why a Retina display is more detailed: more pixel data can be used in textures. So positionInPixels refers to the position of a specific pixel (i.e., point 0 can map to pixel 0 or pixel 1 on a Retina display).
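All of the answers above describe the same relationship: pixels = points × content scale factor. A tiny illustrative sketch (the function name is mine, not the cocos2d API):

```python
def points_to_pixels(point, content_scale_factor):
    """pixels = points * content scale factor (2 on Retina, 1 otherwise)."""
    x, y = point
    return (x * content_scale_factor, y * content_scale_factor)

points_to_pixels((160, 240), 2)  # (320, 480) on a Retina device
points_to_pixels((160, 240), 1)  # (160, 240) on a non-Retina device
```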
I would like to upgrade my 3D application with some high-res assets for the iPhone 4.
I can't update the whole graphic content of my app. I want to mix images in high and low resolution.
My whole application is rendered with OpenGL.
Most of my app is based on billboard sprites, so I could change the scale factor of my OpenGL view, but then I would have to scale all my low-res sprites and update their positions.
Is there another way to do this that changes as little code as possible?
In my project I set all sizes and positions for a 320x480 screen. When running on a Retina device and using a 2x texture, I multiply its dimensions by 0.5 after loading it (for example, if the texture is 100 pixels wide, its width value will be 50).
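A small sketch of that trick (the class and field names are illustrative, not an OpenGL or cocos2d API): keep the full pixel data for rendering, but report the sprite's logical size in points so layout code written for 320x480 never has to change:

```python
class Sprite:
    def __init__(self, pixel_w, pixel_h, scale):
        # Full-resolution size, used when uploading the texture to the GPU.
        self.texture_w, self.texture_h = pixel_w, pixel_h
        # Logical size in points, used by all layout/positioning code.
        self.width = pixel_w / scale
        self.height = pixel_h / scale

hi = Sprite(100, 100, scale=2)  # @2x asset on Retina -> 50x50 points
lo = Sprite(50, 50, scale=1)    # legacy low-res asset -> also 50x50 points
```

High- and low-res assets then coexist: each sprite carries its own scale, so only the texture-loading code changes, not every position in the app.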