On the web, in Google's documentation, and in other documents, I can only find information related to its implementation.
I'd like to know how exactly it works. For example, what sensors does it use, how was the machine learning done, what is the measurement error probability, etc.?
Is that information restricted, or just hard to find?
I want to try out MicroPython (as a beginner) and I'd like some tips on easy tutorials to get started. Something fun to begin with would be a temperature sensor that activates an LED when a certain temperature is exceeded (or some other kind of warning).
Any tips?
A good place to start is the official documentation; it provides all the information you need, whether you want to use an ESP32 or an ESP8266. If you need more information about coding or anything else, you can visit GitHub here. There are also a lot of courses on Udemy if you want, but for me GitHub was great.
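As a concrete starting point, here is a minimal sketch of the project you describe. It assumes an ESP32 with an analog TMP36-style temperature sensor on GPIO34 and an LED on GPIO2; the pins, sensor, and conversion formula are assumptions, so adjust them for your own board and sensor.

# Minimal MicroPython sketch: light an LED when the temperature passes a threshold.
# Assumes an ESP32, a TMP36-style analog sensor on GPIO34, and an LED on GPIO2.
from machine import ADC, Pin
import time

THRESHOLD_C = 30.0            # temperature (Celsius) above which the LED turns on

adc = ADC(Pin(34))            # analog input from the sensor
adc.atten(ADC.ATTN_11DB)      # allow readings across the full 0-3.3 V range
led = Pin(2, Pin.OUT)         # on-board LED on many ESP32 dev boards

while True:
    raw = adc.read()                     # 0..4095
    volts = raw / 4095 * 3.3
    temp_c = (volts - 0.5) * 100         # TMP36: 10 mV per degree C, 500 mV offset
    led.value(1 if temp_c > THRESHOLD_C else 0)
    time.sleep(1)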
Currently I'm working on a BLE project based on Ionic (@ionic-native/ble). I've asked a couple of questions before and finally managed to write an entry-level app. But now another problem has come up: I don't know how to work with the data provided by the device.
Okay, so first, my code is based on this guy's work (thank you, Don, btw): https://github.com/don/ionic-ble-examples/tree/master/connect
And here is the demo:
As you can see, I have a fully functional Estimote beacon and I'm required to get the minor, major, and ID from that beacon. The problem is that in the second image I can't see any attributes related to those three, and furthermore I don't know what to do with the bunch of information I get after connecting. So my question is: what should I do after connecting to a BLE device, and can anyone suggest some good, for-dummies documentation that I can read to understand the meaning of those creepy strings of data? All the Ionic BLE tutorials I've found are outdated, and documents about BLE are extremely hard to understand.
Characteristics are what you interact with whenever you want to read, write, or subscribe to data. Look at all the characteristics and their properties. If a characteristic's properties include 'read', read it and see what kind of information you get back. I think the Don Coleman plugin responds with an ArrayBuffer. To convert an ArrayBuffer into a readable array of bytes, do
[].slice.call(new Uint8Array(value))
See if the documentation of the device matches the response you got back from the read. Are you expecting certain kinds of values? If you don't know what to look for, it will be very hard to tell which information is relevant just by looking at the array of bytes.
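To make that last step concrete, here is a small illustration (in Python rather than JavaScript, since the decoding logic is the same in any language) of what "matching the bytes against the documentation" usually means. The field layout used here is invented purely for illustration; your beacon's documentation defines the real offsets, sizes, and byte order for fields such as its major and minor values.

# Hypothetical example: a characteristic that returns one flag byte followed
# by two 16-bit unsigned integers. The layout is made up for illustration only.
import struct

raw = bytes([0x01, 0x00, 0x10, 0x00, 0x2A])   # pretend this came from a read

flag = raw[0]
# ">HH" = two big-endian unsigned 16-bit values; many BLE fields are
# little-endian instead, in which case you would use "<HH".
field_a, field_b = struct.unpack(">HH", raw[1:5])

print(flag, field_a, field_b)   # prints: 1 16 42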
Hi guys, I'm looking for a solution that enables a user to compare an image to a previously stored image. For example, I take a picture of a chair on my iPhone, and it is compared to a locally stored image; if the similarity is reasonable, the image is confirmed and another action is called.
Most solutions I've been able to find require cloud processing on third-party servers (Visioniq, Moodstocks, Kooaba, etc.). Is this because the iPhone doesn't have sufficient processing power to complete this task?
A small reference library will be stored on the device and referenced when needed.
Users will be able to index their own pictures for recognition.
Is there any such solution that anyone has heard of? My searches have only shown cloud solutions from the above-mentioned BaaS providers.
Any help will be much appreciated.
Regards,
Frank
Try using the OpenCV library in iOS:
OpenCV
You can try using FlannBasedMatcher to match images. Documentation and code are available here; see "Feature Matching with FLANN" there.
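For reference, a rough sketch of FLANN-based matching using OpenCV's Python bindings is below (the question targets iOS, but the equivalent calls exist in the C++ API used there). The file names and the match-count threshold are placeholders, and the ratio-test value of 0.7 is just a common default.

# Sketch: compare two local images with SIFT features + FLANN matching.
import cv2

img1 = cv2.imread("query_chair.jpg", cv2.IMREAD_GRAYSCALE)       # photo just taken
img2 = cv2.imread("reference_chair.jpg", cv2.IMREAD_GRAYSCALE)   # stored reference

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters for float descriptors like SIFT: a KD-tree index.
index_params = dict(algorithm=1, trees=5)   # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep only matches clearly better than the runner-up.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

# Call it a match if enough good correspondences survive; tune 30 on real data.
print("good matches:", len(good))
print("match" if len(good) > 30 else "no match")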
I've got a table that I'd like to present. However, a lot of the information in it is only useful in aggregated or visual form.
For example, the country column by itself is boring, but aggregating all the entries for a country would be really useful. Coordinates are in there as well, so any solution should be able to present things on a map.
Note that the solution can be non-web, but I'd really prefer a web application everyone can access. What I've found so far is just the Google Maps API, but that's not very good at showing non-geographical information, is it?
Note that the table has a lot of dimensions, often nominal or ordinal (i.e. not numeric), so visual, plotting-focused libraries are not that good a fit.
EDIT: in the absence of other answers, maybe this will help:
Today, this article popped into my RSS reader: Patterns of Destruction?: Visualizing Earthquake Data w/Tableau.
The author uses Tableau to visualize his data and also mentions Data Applied and GoodData.
Combine the Google Maps API with something like the Javascript Visualization Toolkit?
There are many libraries out there that might do the trick as well:
Raphael
Axis
...
I'm trying to come up with the largest possible group of friends that would theoretically get along with each other, i.e., each person in the group should know at least 50% of the other people in the group.
I'm trying to come up with an algorithm for this that doesn't take ridiculously long; Facebook's API/cross-server talk is pretty slow as is.
I was thinking I could start with the friend that has the most mutual friends with me first, and then add people to the group one by one. But who would I choose next?
Just interested in the theory, no code is necessary.
Edit: When I said "theory", what I really meant was: what's the next logical step, in plain English? :) I was hoping I could code this up in an afternoon, but I guess it's a bit more complicated than I anticipated, and I'm not sure I want to spend weeks delving into heavy graph theory. Nevertheless, maybe someone else will find this interesting.
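To make the greedy idea above concrete, here is a rough sketch, assuming the friend graph has already been fetched into an adjacency dict of sets (person -> set of friends). It is only a heuristic: the underlying "everyone knows at least 50% of the group" requirement is a hard, quasi-clique-style combinatorial problem, so this makes no optimality claim.

# Greedy sketch: keep adding the outsider who already knows the most group
# members, as long as the 50% rule still holds for everyone in the group.
def grow_group(graph, seed):
    group = {seed}
    while True:
        outsiders = [p for p in graph if p not in group]
        if not outsiders:
            break
        best = max(outsiders, key=lambda p: len(graph[p] & group))
        candidate = group | {best}
        # 50% rule: each member must know at least half of the other members.
        ok = all(len(graph[p] & (candidate - {p})) >= (len(candidate) - 1) / 2
                 for p in candidate)
        if not ok:
            break          # adding anyone else (by this ordering) breaks the rule
        group = candidate
    return group

# Tiny example: "me" plus a few friends with partial overlap.
friends = {
    "me":   {"ann", "bob", "cat"},
    "ann":  {"me", "bob"},
    "bob":  {"me", "ann", "cat"},
    "cat":  {"me", "bob"},
    "dave": {"cat"},
}
print(grow_group(friends, "me"))   # {'me', 'ann', 'bob', 'cat'}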
MIT did some work on social graphing a while back. Although it used mobile phone data, the clustering algorithms and other systems should still apply, even though they were built around different inputs and criteria.
There is more MIT chatter about social graphing going on at the moment. Definitely the place to look for technical pointers on this kind of thing.
Whilst the problem of graph enumeration from a given node out along its edges is NP-complete for most useful problems, the structure of the graph traversal and the wealth of available information might help you make this more efficient:
For any node (profile) N, you could data-scrape using Google or something similar to find its outgoing edges. This means you can harness a cache of the pages and Google's search technology to mitigate having to traverse the edges yourself.
Social profiles contain tons of metadata. Developing a statistical method for estimating the likelihood of A knowing B without a direct path might be useful. After all, friends tend to have a) similar locations and b) similar interests.
Other data, seemingly irrelevant, can provide a means of locating people who are likely to know each other, and then you can double-check the edges. Things such as chatter on boards about a band or gig, or people mentioning "cat fight" when Kate smacked Mary in the mouth.
The data just needs looking at in the right way, in the same way MIT looked at geographical statistics to determine relationships through phones.
Good Luck
There is an algorithm called SCAN; with some precalculation, it can cluster a network at good speed.
You can find information about the algorithm here: SCAN: A Structural Clustering Algorithm for Networks
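For a feel of what SCAN actually computes, here is a tiny sketch of the structural similarity at its core (neighbourhoods include the node itself); the full algorithm then clusters nodes whose similarity to enough neighbours exceeds a threshold epsilon. This is only an illustration of the measure, not the complete clustering procedure.

# Structural similarity from the SCAN paper:
# sigma(u, v) = |N(u) & N(v)| / sqrt(|N(u)| * |N(v)|), where N(x) includes x itself.
from math import sqrt

def structural_similarity(graph, u, v):
    nu = graph[u] | {u}
    nv = graph[v] | {v}
    return len(nu & nv) / sqrt(len(nu) * len(nv))

# Example: graph as an adjacency dict of sets.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(structural_similarity(graph, "a", "b"))  # 1.0  -- a and b share all neighbours
print(structural_similarity(graph, "a", "d"))  # ~0.41 -- only connected through c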
This is more "broad", but see if it helps to get ideas.