Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 1 year ago.
I need to plot discrete X,Y data as a MAP in a PNG or GIF file colour-coded to indicate discrete values.
All the JavaScript chart libraries I've seen do pie charts, line graphs, bar charts, and so on, but NOT scatterplots.
Does anyone know the name of a library capable of scatterplots?
My current solution is to render the map as an HTML table (and screen-capture it), where the cells are empty but have coloured backgrounds depending on the discrete value. As you would expect, this is slow, particularly when X and Y can take values from 0 to 200 or more.
It also suffers from distortion when browsers choose the cell size: browsers decrease cell width across the page as they realise their original choice was too wide, so circular maps end up looking egg-shaped, with one end pointier than the other.
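For what it's worth, the table-as-grid workaround can also be done directly in a pixel buffer, which sidesteps browser cell-sizing entirely. A minimal sketch in Python with NumPy (the palette, image size, and cell size here are made-up assumptions, not from the question); the resulting array can be written out as a PNG with any image library:

```python
import numpy as np

# Map each discrete value to an RGB colour (assumed palette).
PALETTE = {
    1: (255, 0, 0),   # value 1 -> red
    2: (0, 0, 255),   # value 2 -> blue
}

def render_map(points, size=201, cell=4):
    """Render discrete (x, y, value) points as a colour-coded image.

    Each data cell becomes a fixed `cell` x `cell` block of pixels, so
    the map keeps its aspect ratio no matter how it is displayed.
    Unset cells stay white.
    """
    img = np.full((size * cell, size * cell, 3), 255, dtype=np.uint8)
    for x, y, value in points:
        img[y * cell:(y + 1) * cell, x * cell:(x + 1) * cell] = PALETTE[value]
    return img

# Two sample points at opposite corners of a 0..200 range:
img = render_map([(0, 0, 1), (200, 200, 2)])
```

Because every cell is a fixed number of pixels, circles stay circles regardless of how wide the browser window is.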
How about ZingChart (http://www.zingchart.com/docs/chart-types/scatter/)? Other libraries have scatter charts too. The table-based idea you used sounds more like a "heatmap", or what some call a piano-style chart, with each cell's value represented by intensity, an image, etc. That isn't quite a scatter chart, but it is in the library as well: http://www.zingchart.com/docs/chart-types/piano/
You should be able to find equivalents in other libraries like HighCharts as well.
Disclosure: I work at ZingChart.
Please take a look at Raphaël, a JavaScript vector-graphics library.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 months ago.
Whenever I make 2D games, I always attach the sprites under a Canvas so that I can set the canvas scaling to "Scale With Screen Size", so the sprites scale with the user's aspect ratio. Is this bad practice, and is there a better way of doing it?
Unity shouldn't render objects that are off-screen, so at run time there shouldn't be any graphics-performance problems.
From personal experience, making a game responsive for different resolutions is complicated and many techniques can be used to get good results.
I also use the "Scale with screen size" setting, and over the years I haven't found anything that works better.
One general performance tip: if you have many elements, perhaps animated, that are never seen by the camera, I recommend disabling them from a script. Graphically they shouldn't cause problems, but the engine still processes them frame by frame, so if they aren't essential it is better to disable them.
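The advice above is language-agnostic. A sketch of the idea in Python (the rectangle test stands in for Unity's visibility callbacks, and all names here are made up for illustration; in Unity you would call `SetActive(false)` or pause the Animator instead of flipping a flag):

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rects are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def cull_offscreen(objects, camera_rect):
    """Disable any object whose bounds do not intersect the camera view.

    `objects` is a list of dicts with 'bounds' and 'active' keys; the
    point is simply to stop updating things the camera cannot see.
    """
    for obj in objects:
        obj["active"] = overlaps(obj["bounds"], camera_rect)

scene = [
    {"bounds": (0, 0, 1, 1), "active": True},    # inside the camera view
    {"bounds": (50, 50, 1, 1), "active": True},  # far off screen
]
cull_offscreen(scene, camera_rect=(0, 0, 10, 10))
```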
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
I wrote a C++ code where I perform edge detection on color or gray images (ppm files).
My code works well but is not as good as a specific Gimp plugin, especially when detecting more faint edges (low luminosity gradient). The plugin I am referring to is under Filters/Edge-Detect/Image Gradient in Gimp 2.10.8. Mouse hover says "Compute gradient magnitude and/or direction by central differences".
Below I embedded a gray test image to compare the results (i.e. gradient intensity), although my work is in color as well. The test image consists of some 13 circular rings with various luminosities (constant for each ring). The difference in luminosity between two adjacent rings increases from 2 luminosity units (for the inner rings) to 30 luminosity units (for the outer rings) in the outward radial direction.
As expected, the detected gradient is small for the inner rings and higher for the outer rings. The problem is that my C++ code is less sensitive to small gradients than the Gimp plugin is, as can be seen from the other two images below.
Where can I find the code for the Gimp Image Gradient plugin so I can learn something from it? I am not interested in the other Gimp plugins for edge detection (I verified that they are not quite as good as the Image Gradient, at least for my application).
It is in the GEGL package; the file is operations/common/image-gradient.c.
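For reference, "gradient magnitude by central differences" is only a few lines of arithmetic. A Python/NumPy sketch of the same idea (the GEGL C version handles borders and colour channels more carefully, so treat this as an illustration, not a transcription of its code):

```python
import numpy as np

def image_gradient(img):
    """Gradient magnitude and direction by central differences.

    gx(x, y) = (I(x+1, y) - I(x-1, y)) / 2, and likewise for gy; the
    magnitude is sqrt(gx^2 + gy^2) and the direction atan2(gy, gx).
    Border pixels are left at zero for simplicity.
    """
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction

# A horizontal ramp has gradient magnitude 1 everywhere away from borders.
ramp = np.tile(np.arange(8.0), (8, 1))
mag, _ = image_gradient(ramp)
```

Sensitivity to faint edges usually comes from keeping everything in floating point and normalising for display afterwards, rather than from a cleverer operator.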
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Inside this pattern there is an image of a kettle. You can see it if you really focus. I'd like to know whether I can use MATLAB to decode this pattern. Can you give me some tips or code samples?
Warning
This answer will not attempt to solve this programmatically. Instead it focuses on letting MATLAB do the eye-crossing* for you; however, you still need to decide how far* to cross them.
* this terminology sounds wrong, but I'm just going to go with it
Method
Simply shifting the image and subtracting it from the original should give reasonable results. Choosing the shift, however, is the tricky part, but once you know it, something as simple as imData-circshift(imData,[shiftY,shiftX,0]) should give a good image.
Here is a crude but simple GUI wrapper for the line of code above (just run the function with an image file name as an argument).
It doesn't give great results for the given image, but it works better on some of these.
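If you want to experiment outside MATLAB, the one-liner translates directly. A NumPy sketch, where np.roll plays the role of circshift (the image here is a synthetic stereogram tiled with an assumed period of 7 columns, just to show the effect):

```python
import numpy as np

def decode(im_data, shift_x, shift_y=0):
    """Shift the image and subtract it from the original.

    Equivalent to imData - circshift(imData, [shiftY, shiftX, 0]):
    wherever the repeating pattern matches itself at that shift, the
    difference goes to zero, and the hidden object stands out as the
    region where it doesn't.
    """
    return im_data - np.roll(im_data, (shift_y, shift_x), axis=(0, 1))

# Synthetic example: a pattern repeating every 7 columns decodes to
# (near) zero at shift_x = 7.
rng = np.random.default_rng(0)
tile = rng.random((16, 7))
stereogram = np.tile(tile, (1, 4))   # 16 x 28 image, period 7
flat = decode(stereogram, shift_x=7)
```

On a real stereogram only the background cancels at the right shift; the hidden object, being encoded with a slightly different disparity, survives the subtraction.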
Example
Initial Image
After a little playing
I think "you can see it if you really focus" is not quite right. In my opinion you rather have to defocus to see the hidden image: you have to focus on an imagined object that lies behind the image plane.
To extract it with MATLAB, I would suggest trying some stereo algorithms. Correlate each line of the image with itself and find the repetitions. This is the same way our brain sees the hidden image.
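That correlation idea can be sketched as follows (Python/NumPy for illustration; MATLAB's xcorr would do the same job). For a row taken from a stereogram, the normalised autocorrelation peaks again at the pattern's repeat distance, which is exactly the shift you would then use to decode it:

```python
import numpy as np

def row_autocorr(row, max_lag):
    """Normalised autocorrelation of one image row for lags 1..max_lag."""
    row = row - row.mean()
    out = []
    for lag in range(1, max_lag + 1):
        a, b = row[:-lag], row[lag:]
        out.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return np.array(out)

# A row that repeats every 9 samples correlates almost perfectly at lag 9.
rng = np.random.default_rng(1)
row = np.tile(rng.random(9), 8)          # period 9, length 72
corr = row_autocorr(row, max_lag=12)
period = int(np.argmax(corr)) + 1        # estimated repeat distance
```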
This problem is a stereo-vision problem: if you cross your eyes correctly you will see the object in 3D. I suggest you have a look at disparity maps; for instance, MATLAB's disparity function (http://www.mathworks.co.uk/help/vision/ref/disparity.html) computes a disparity map from a pair of 2D images.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have an app idea, so I mocked it up in C# because that's my most fluent language. I now want to port it to Swift if possible. How hard will it be to generate a grid of 6x6 blocks? Each block needs to be separate from the others, as I need to change their properties and detect touches on them. This is the grid I currently have running on Windows.
Thanks for any help.
There are a lot of different ways to approach this problem, so you need to provide more details. You could do it with a single custom UIView, drawing the current state of your model in the drawRect method. That view could also handle all of the touch events, since you can calculate which cell the user touched in the same way you calculated where to draw and colour the squares.
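The hit-testing arithmetic that answer describes is the same in any language. A Python sketch (porting it to Swift is mechanical); the grid origin and size are made-up values for illustration:

```python
def cell_at(point, grid_origin, grid_size, cells=6):
    """Map a touch point to a (row, col) cell in a cells x cells grid.

    Returns None when the touch falls outside the grid. This is the
    same arithmetic used to draw the grid, run in reverse.
    """
    x, y = point
    ox, oy = grid_origin
    cell = grid_size / cells
    col = int((x - ox) // cell)
    row = int((y - oy) // cell)
    if 0 <= row < cells and 0 <= col < cells:
        return row, col
    return None

# A 6x6 grid occupying a 300x300 region whose top-left corner is (10, 10):
touch = cell_at((10.0, 10.0), (10, 10), 300)   # top-left cell
```

Keeping the model as a plain 6x6 array and treating the view as a pure renderer of it makes both the drawing and the touch handling one function each.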
But if you want to use SpriteKit, then this tutorial will show you all the details of doing a 2D array, using sprites, tiles, etc.
http://www.raywenderlich.com/75270/make-game-like-candy-crush-with-swift-tutorial-part-1
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm implementing some code in MATLAB to track the left-ventricle wall position in echocardiography images using a contour-based method. Unfortunately, in some frames the contour evolves more than expected, and in some regions the wall does not have good contrast.
Does anyone know a way to restrict the contour from evolving unexpectedly from frame to frame, taking into account both the previous frame's position and the new frame's shape?
Thank you all for helping.
Image segmentation is a hard problem. There is no general approach that works well in every situation. How are your contours being defined? Are you doing threshold-based segmentation, or using another approach? Have you tried transforming into a polar coordinate system in the centre of the LV? Have you tried quantifying some sort of 'least-squares' cost associated with moving the contour?
All I can suggest is to look at how people solve similar problems. In my field (namely MRI), the best we have (a) isn't really all that good, and (b) is probably an open-source MATLAB program for cardiac segmentation called Segment (see http://medviso.com/products/segment/features/cmr/). I suggest you look at how they do it, and see if you can adapt the method to work with the (much noisier, much harder to interpret) echo images.
Sorry I can't be more helpful!
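On restricting the frame-to-frame evolution specifically: one simple, common trick (a sketch of the idea, not a complete tracker, and the maximum-displacement threshold is an assumed tuning parameter) is to clamp each contour point's displacement relative to its position in the previous frame, so the new shape is a compromise between the old position and the new image evidence. In Python/NumPy:

```python
import numpy as np

def constrain_contour(prev, proposed, max_disp):
    """Clamp each contour point's motion relative to the previous frame.

    prev, proposed: (N, 2) arrays of contour points in matching order.
    Any point that tries to move farther than max_disp pixels is pulled
    back onto a circle of radius max_disp around its old position.
    """
    delta = proposed - prev
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_disp / np.maximum(dist, 1e-12))
    return prev + delta * scale

prev = np.array([[0.0, 0.0], [10.0, 0.0]])
proposed = np.array([[0.0, 1.0], [10.0, 8.0]])   # second point jumps 8 px
constrained = constrain_contour(prev, proposed, max_disp=2.0)
```

A soft version of the same idea is to add a quadratic penalty on the displacement to the contour's energy function instead of a hard clamp; both encode the prior that the wall moves only a little between consecutive frames.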