I am developing a shape-descriptor module for clustering binary crack images. In short, I have engineered several features. To build the feature vector for images that contain several cracks, and since the largest crack usually (though not always) determines the crack type, I weight each crack's features by the area it occupies (its pixel count) and then take the average.
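To make the weighting concrete, here is a minimal numpy sketch of that aggregation (the function and variable names are only illustrative):

import numpy as np

def aggregate_features(crack_features, crack_areas):
    # crack_features: (num_cracks, num_features) array, one row per crack.
    # crack_areas: pixel count of each crack, used as its weight.
    weights = np.asarray(crack_areas, dtype=float)
    weights = weights / weights.sum()
    return weights @ np.asarray(crack_features, dtype=float)   # area-weighted average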
Take as an example the below image:
[image: one long horizontal crack plus a small crack in the corner]
The horizontal crack is what makes this a transverse crack, but at the same time I can't ignore the one in the corner.
My question is: what is the best way to deal with this? I feel my feature vector should be multi-dimensional to cover the multi-crack images, but I haven't seen a problem framed like this before.
Note 1: Refining the images by removing the small cracks is not an option, because in some other crack types their presence tells a different story. Example:
[image: a meandering crack accompanied by several small cracks]
Here the small cracks are what identify this as a meandering crack.
Note 2: From an engineering point of view, the simpler the better.
Note 3: The number of cracks in a single image can go up to 18, yet the majority of my data contains only one crack.
I am coding a spell-casting system where you draw a symbol with your wand (mouse), and it can recognize said symbol.
There are two methods I believe might work: a neural network and an "invisible grid system".
The problem with the neural network approach is that it would likely be suboptimal in Roblox Luau and would not match the performance or speed I am hoping for. (Although I may just be lacking neural-network knowledge, so please let me know whether I should keep trying to implement it this way.)
For the invisible grid system, I thought of converting the drawing into 1s and 0s (1 = drawn, 0 = blank), then checking whether it is similar to one of the symbols. I create the symbols by making a table like:
local Symbol = { -- "Answer Key" shape, looks like a tilted square
    "00100", -- rows stored as strings so the leading zeros are preserved
    "01010",
    "10001",
    "01010",
    "00100",
}
The problem is that user error will likely make it inaccurate, like the blue boxes in this "spell", which show user error/inaccuracy. I'm also sure that if I have multiple symbols, comparing every value in every symbol will not be quick.
Do you know an algorithm that could help me do this, or an alternative approach I am missing? Thank you for reading my post.
I'm sorry if the format of this is incorrect; this is my first Stack Overflow post. I will gladly delete it if it doesn't abide by the rules. (Let me know if there are any tags I should add.)
One possible approach to solving this problem is to use a template matching algorithm. In this approach, you would create a "template" for each symbol that you want to recognize, which would be a grid of 1s and 0s similar to what you described in your question. Then, when the user draws a symbol, you would convert their drawing into a grid of 1s and 0s in the same way.
Next, you would compare the user's drawing to each of the templates using a similarity metric, such as the sum of absolute differences (SAD) or normalized cross-correlation (NCC). The template with the lowest SAD or highest NCC value would be considered the "best match" for the user's drawing, and therefore the recognized symbol.
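As a rough illustration (in Python rather than Luau purely for brevity, and with grid and template names that are only assumptions), the SAD comparison could look like this:

def sad(grid_a, grid_b):
    # Sum of absolute differences between two equally sized 0/1 grids;
    # 0 means the grids are identical.
    return sum(
        abs(a - b)
        for row_a, row_b in zip(grid_a, grid_b)
        for a, b in zip(row_a, row_b)
    )

def best_match(drawing, templates):
    # templates: dict mapping a symbol name to its reference grid.
    # The template with the lowest SAD is the recognized symbol.
    return min(templates, key=lambda name: sad(drawing, templates[name]))

With 5x5 grids and a handful of symbols this is only a few hundred integer operations per cast, so speed should not be an issue.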
There are a few advantages to using this approach:
It is relatively simple to implement, compared to a neural network.
It is fast, since you only need to compare the user's drawing to a small number of templates.
It can tolerate some user error, since the templates can be designed to be tolerant of slight variations in the user's drawing.
There are also some potential disadvantages to consider:
It may not be as accurate as a neural network, especially for complex or highly variable symbols.
The templates must be carefully designed to be representative of the expected variations in the user's drawings, which can be time-consuming.
Overall, whether this approach is suitable for your use case will depend on the specific requirements of your spell-casting system, including the number and complexity of the symbols you want to recognize, the accuracy and speed you need, and the resources (e.g. time, compute power) that are available to you.
I have a situation where I have many images, and I compare them using a specific fuzz factor (say 10%), looking for images that match. Works fine.
However, I sometimes have a situation where I want to compare all images to all other images (e.g. 1000 images). Doing 5000+ ImageMagick compares is way too slow.
Hashing all the files and comparing the hashes 5000 times is lightning fast, but of course only works when the images are identical (no fuzz factor).
I'm wondering if there is some way to produce an ID or fingerprint - or maybe a range of IDs - where I could very quickly determine what images are close enough to each other, and then pay the ImageMagick compare cost only for those likely matches. Ideas or names of existing algorithms/approaches are very welcome.
There are quite a few image hashing algorithms out there. pHash is the one that springs to mind first: http://www.phash.org/. It copes with the basic transformations one might apply to an image. If you want to be more sophisticated and roll your own, you can take a pre-trained image classifier such as an ImageNet model (https://www.learnopencv.com/keras-tutorial-using-pre-trained-imagenet-models/), lop off the final layer, and use the penultimate layer's activations as a vector. For a small number of images you can easily do a nearest-neighbour search; if you have more, you can use Annoy (https://github.com/spotify/annoy) to make the nearest-neighbour search more efficient.
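As a rough sketch of the prefilter idea (assuming the Python imagehash and Pillow packages; the file list and the distance threshold of 10 are placeholders you would tune):

from PIL import Image
import imagehash

image_paths = ["img001.png", "img002.png"]           # placeholder file list

# 64-bit perceptual hashes; the Hamming distance between two hashes is a
# cheap proxy for visual similarity.
hashes = {p: imagehash.phash(Image.open(p)) for p in image_paths}

candidates = []
for i, a in enumerate(image_paths):
    for b in image_paths[i + 1:]:
        if hashes[a] - hashes[b] <= 10:              # small Hamming distance
            candidates.append((a, b))                # only these get the full ImageMagick compare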
In Word2Vec, the word embeddings are learned using co-occurrence: each vector is updated so that words that occur in each other's context move closer together.
My questions are the following:
1) If you already have a pre-trained set of embeddings, say a 100-dimensional space with 40k words, can you add 10 additional words to this embedding space without changing the existing word embeddings? That is, you would only update the vectors of the new words, using the existing embeddings. I'm thinking of this problem with respect to the word2vec algorithm, but if people have insight into how GloVe embeddings behave in this case, I am still very interested.
2) Part 2 of the question: can you then use the NEW word embeddings in a NN that was trained with the previous embedding set and expect reasonable results? For example, if I had trained a NN for sentiment analysis and the word "nervous" was previously not in the vocabulary, would "nervous" be correctly classified as "negative"?
This is a question about how sensitive (or robust) NNs are with respect to the embeddings. I'd appreciate any thoughts/insight/guidance.
The initial training used info about known words to plot them in a useful N-dimensional space.
It is of course theoretically possible to then use new information, about new words, to also give them coordinates in the same space. You would want lots of varied examples of the new words being used together with the old words.
Whether you want to freeze the positions of old words, or let them also drift into new positions based on the new examples, could be an important choice to make. If you've already trained a pre-existing classifier (like a sentiment classifier) using the older words, and didn't want to re-train that classifier, you'd probably want to lock the old words in place, and force the new words into compatible positioning (even if the newer combined text examples would otherwise change the relative positions of older words).
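For word2vec specifically, gensim supports this kind of incremental vocabulary update; a hedged sketch (the model path and sentences are placeholders, and note that plain continued training will also nudge the old vectors unless you additionally lock them in place):

from gensim.models import Word2Vec

model = Word2Vec.load("pretrained_40k.model")        # placeholder path

# Varied sentences that use the new words together with already-known words.
new_sentences = [
    ["she", "felt", "nervous", "before", "the", "exam"],
    # ... many more examples
]

model.build_vocab(new_sentences, update=True)        # grow the vocabulary
model.train(new_sentences,
            total_examples=len(new_sentences),
            epochs=model.epochs)                     # continue training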
Since after an effective train-up of the new words, they should generally be near similar-meaning older words, it would be reasonable to expect classifiers that worked on the old words to still do something useful on the new words. But how well that'd work would depend on lots of things, including how well the original word-set covered all the generalizable 'neighborhoods' of meaning. (If the new words bring in shades of meaning of which there were no examples in the old words, that area of the coordinate-space may be impoverished, and the classifier may have never had a good set of distinguishing examples, so performance could lag.)
The steganography link shows a demonstration of steganography. My question is: when the number of bits to be replaced is n = 1, the method is irreversible, i.e. the cover is not equal to the stego image (in an ideal, perfect case the cover used should be identical to the steganography result). It only works properly when the number of bits to be replaced is n = 4, 5 or 6. When n = 7, the stego image becomes noisy and different from the cover, the result is no longer inconspicuous, and it is evident that an operation of steganography has taken place. Can somebody please explain why this is so, and what needs to be done to make the process reversible and lossless?
So let's see what the code does. From the hidden image you extract the n most significant bits (MSBs) and hide them in the n least significant bits (LSBs) of the cover image. There are two points to notice about this, and they answer your questions.
The more bits you change in your cover image, the more distorted your stego image will look.
The more information you use from the hidden image, the closer the reconstructed image will be to the original. The linked reference shows the amount of information an image carries in each bit, from the most to the least significant.
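To make that concrete, here is a minimal numpy sketch of the MSB-into-LSB scheme described above (not necessarily identical to the code you linked; uint8 grayscale arrays of the same shape are assumed):

import numpy as np

def embed(cover, secret, n):
    # Keep the top (8 - n) bits of the cover and replace its n LSBs
    # with the n MSBs of the secret image.
    keep_mask = (0xFF << n) & 0xFF            # zeroes out the n LSBs
    secret_msb = secret >> (8 - n)            # top n bits of the secret
    return ((cover & keep_mask) | secret_msb).astype(np.uint8)

def extract(stego, n):
    # Only the n MSBs of the secret come back; the low (8 - n) bits are
    # gone for good, which is why retrieval is lossy unless n == 8.
    return ((stego & ((1 << n) - 1)) << (8 - n)).astype(np.uint8)

With a larger n, extract() recovers more of the secret but embed() disturbs the cover more, which is exactly the trade-off in the two points above.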
If you want to visually check the difference between the cover and stego images, you can use the Peak Signal-to-Noise-Ratio (PSNR) equation. It is said the human eye can't distinguish differences for PSNR > 30 dB. Personally, I wouldn't go for anything less than 40 but it depends on what your aim is. Be aware that this is not an end-all, be-all type of measurement. The quality of your algorithm depends on many factors.
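For reference, PSNR for 8-bit images is just 10 * log10(255^2 / MSE); a small sketch, assuming same-shape uint8 arrays:

import numpy as np

def psnr(cover, stego, peak=255.0):
    # Mean squared error between the two images, then the standard PSNR formula.
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                   # identical images
    return 10 * np.log10(peak ** 2 / mse)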
No, the cover and stego images are not supposed to be the same. The idea is to minimise the differences so as to resist detection, and there are many compromises involved in achieving that, such as the size of the message you are willing to hide.
Perfect retrieval of a secret image requires hiding all the bits of all its pixels, which means you can only hide a secret that is 1/8th of the cover image's size. Note, though, that this is the worst-case scenario, which doesn't consider encryption, compression or other techniques. That's the idea, but I won't provide a code snippet based on the above because it is very inflexible.
Now, there are cases where you want the retrieval to be lossless, either because the data are encrypted or of a sensitive nature. In other cases an approximate retrieval will do the job. For example, if you were to encode only the 4 MSBs of an image, someone extracting the secret would still get a good idea of what it initially looked like. If you still want a lossless method but not the one just suggested, you need to use a different algorithm. The choice of algorithm depends on various characteristics you want it to have, including but not restricted to:
robustness (how resistant the hidden information is to image editing)
imperceptibility (how hard it is for a stranger to know the existence of a secret, but not necessarily the secret itself, e.g. chi-square attack)
type of cover medium (e.g., specific image file type)
type of secret message (e.g., image, text)
size of secret
I have a 1200x1175-pixel picture. I want to train a network (MLP or Hopfield) to learn a specific 201x111-pixel part of it, save its weights, and reuse them in a new network (with the same features) to find that specific part without retraining. My questions are: which kind of network is useful, MLP or Hopfield? If MLP, how many hidden layers? The trainlm function is unusable because of an "out of memory" error. I also converted the picture to a binary image; is that useful?
What exactly do you need the solution to do? Find an object within an image (like "Where's Waldo"?)? Will the target object always be the same size and orientation? Might it look different because of lighting changes?
If you just need to find a fixed pattern of pixels within a larger image, I suggest using a straightforward correlation measure, such as cross-correlation, to find it efficiently.
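For example, with OpenCV (an assumption on my part; any normalized cross-correlation implementation will do, and the file names are placeholders):

import cv2

image = cv2.imread("picture.png", cv2.IMREAD_GRAYSCALE)   # the 1200x1175 picture
patch = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE)     # the 201x111 target

# Slide the patch over the image and score each position; the location of
# the maximum score is the best match.
scores = cv2.matchTemplate(image, patch, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_top_left = cv2.minMaxLoc(scores)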
If you need to contend with any of the issues mentioned above, then there are two basic solutions: 1. build a model using examples of the object in different poses, scalings, etc., so that the model recognizes any of them, or 2. develop a way to normalize the patch of pixels being examined, to minimize the effect of those distortions (like Hu's invariant moments). If nothing else, you'll want to perform some sort of data reduction to get the number of inputs down. Technically, you could also try a model that is invariant to rotations, etc., but I don't know how well those work; I suspect they are more temperamental than traditional approaches.
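If you go the normalization route, Hu's moments give a compact, rotation- and scale-tolerant description of a patch; a tiny OpenCV sketch (again an assumption, with a placeholder file name):

import cv2

patch = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE)     # patch to describe
hu = cv2.HuMoments(cv2.moments(patch)).flatten()          # 7 invariant values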
I found AdaBoost helpful for picking out only the important bits of an image. That, plus resizing the image to something very tiny (like 40x30) with a Gaussian filter, will speed things up and put weight on a broader area of the photo rather than on a tiny, insignificant pixel.
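A rough sketch of that preprocessing (OpenCV assumed, file name is a placeholder):

import cv2

# Blur, then shrink to 40x30 so each coarse pixel summarises a region of the
# photo rather than a single, possibly insignificant, pixel.
img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
small = cv2.resize(cv2.GaussianBlur(img, (5, 5), 0), (40, 30))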