Using PrintNode to transfer data to Excel (REST)

I am trying to take data from a USB-HID scale (Dymo S100) and get it into Excel. I came across a program called PrintNode: https://www.printnode.com/docs/reading-usb-scales-over-the-internet/
It seems like the solution; however, I don't have enough knowledge to make use of the API to transfer the data to Excel. Any advice would be a tremendous help. Thank you.
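A rough sketch of what this glue code could look like, assuming Python with requests and openpyxl and the scales endpoint described in the linked PrintNode docs. The API key, computer ID, output path, and the exact response field names are placeholders; check the docs for the precise schema your account returns.

```python
# Sketch: poll PrintNode's scales endpoint and append readings to an Excel file.
# API key, computer ID, and field handling are placeholders -- verify against the docs.
import requests
from openpyxl import Workbook

API_KEY = "your-printnode-api-key"   # placeholder
COMPUTER_ID = 12345                  # placeholder: ID of the PrintNode client machine

resp = requests.get(
    f"https://api.printnode.com/computer/{COMPUTER_ID}/scales",
    auth=(API_KEY, ""),              # PrintNode uses the API key as the basic-auth username
    timeout=10,
)
resp.raise_for_status()
readings = resp.json()               # list of scale readings as JSON objects

wb = Workbook()
ws = wb.active
ws.append(["device", "port", "raw_reading"])
for r in readings:
    # "deviceName"/"port"/"measurement" are assumed field names; dump the whole
    # object as a fallback if your response schema differs.
    ws.append([r.get("deviceName"), r.get("port"), str(r.get("measurement", r))])
wb.save("scale_readings.xlsx")       # placeholder output path
```

From there, the workbook can be opened in Excel directly, or the loop can be adapted to write into an existing sheet.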

Related

accessing p-values in PySpark UnivariateFeatureSelector module

I'm currently in the process of performing feature selection on a fairly large dataset and decided to try out PySpark's UnivariateFeatureSelector module.
I've been able to get everything sorted out except one thing -- how on earth do you access the actual p-values that have been calculated for a given set of features? I've looked through the documentation and searched online, and I'm starting to wonder if you can't... but that seems like a gross oversight for such a package.
thanks in advance!
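One workaround sketch, assuming Spark 3.1+ and categorical features and label: the selector model only exposes the selected feature indices, but the underlying tests in pyspark.ml.stat (ChiSquareTest here; ANOVATest and FValueTest cover the other feature/label type combinations) return the p-values directly.

```python
# Sketch: get per-feature p-values from pyspark.ml.stat instead of the selector model.
# Assumes the features are already assembled into a single vector column "features".
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.stat import ChiSquareTest

spark = SparkSession.builder.appName("pvalues-example").getOrCreate()

df = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0, 1.0)),
     (1.0, Vectors.dense(1.0, 0.0)),
     (1.0, Vectors.dense(1.0, 1.0))],
    ["label", "features"],
)

# Returns a single-row DataFrame with pValues, degreesOfFreedom, and statistics,
# one entry per feature in the vector column.
result = ChiSquareTest.test(df, "features", "label").head()
print(result.pValues)
```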

Theta Sketch (Yahoo) on SnappyData

How can I store Theta Sketches (Yahoo) in a SnappyData table instead of writing them to a file? I generate billions of sketches every day and need to keep many millions of sketches online for real-time queries. Can anyone help me? Thanks.
Can't you store these in a column with a BLOB data type?
If you are writing out from a Spark program and managing the sketches in a DataFrame, I would think df.write.format("column").saveAsTable should work.
Otherwise, serialize (sketch.compact().toByteArray()) and store in a BLOB column using SQL.
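As a rough sketch of the DataFrame route, assuming the datasketches Python bindings and a SnappyData-enabled SparkSession (the table and column names below are made up):

```python
# Sketch: serialize a Theta Sketch to bytes and store it in a SnappyData column table.
from datasketches import update_theta_sketch
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, BinaryType

spark = SparkSession.builder.appName("theta-sketch-store").getOrCreate()

# Build a sketch and serialize it (mirrors sketch.compact().toByteArray() in Java).
sk = update_theta_sketch()
for user_id in ["u1", "u2", "u3"]:
    sk.update(user_id)
payload = sk.compact().serialize()   # bytes suitable for a BLOB/BINARY column

schema = StructType([
    StructField("sketch_key", StringType(), False),
    StructField("sketch_bytes", BinaryType(), False),
])
df = spark.createDataFrame([("daily_users_2024_01_01", bytearray(payload))], schema)

# "column" is SnappyData's column-store format; plain Spark would use parquet etc.
df.write.format("column").mode("append").saveAsTable("theta_sketches")
```

Queries can then read the bytes back and deserialize them for unions or estimates.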

streaming data to opentsdb

I have played a bit with OpenTSDB, each time reading data from a .txt file and writing the results back to a .txt file.
Is there a way to bypass the .txt file, i.e. stream some data directly to OpenTSDB?
I have seen the implementation with tcollector, but I am looking for something more general that would scale to some pretty large data.
Thanks for your help.
Philippe C.
PS: I am being as specific as I can, but I know it isn't really clear; if you have any questions that I haven't thought of, ask away!
The general way to get data into OpenTSDB is to use the HTTP /api/put endpoint.
To scale to large amounts of data, you can run multiple OpenTSDB servers in a cluster and round-robin the data between them with an HTTP load balancer.
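A minimal sketch of that, assuming an OpenTSDB instance (or a load balancer in front of several) reachable on the default port; the metric name, tags, and host are placeholders:

```python
# Sketch: push data points straight to OpenTSDB's HTTP API instead of via .txt files.
import time
import requests

OPENTSDB_URL = "http://localhost:4242/api/put"   # placeholder host/port

points = [
    {
        "metric": "sys.cpu.user",
        "timestamp": int(time.time()),
        "value": 42.5,
        "tags": {"host": "web01"},    # at least one tag is required
    },
]

resp = requests.post(OPENTSDB_URL, json=points, timeout=5)
resp.raise_for_status()   # /api/put returns 204 No Content on success
```

The same POST can carry a batch of points per request, which is what you would want for a high-volume stream.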

Read from MongoDB into sklearn (scikit-learn)

Apologies if this has been asked, but I have looked here and elsewhere on the web for an answer and nothing has been forthcoming.
The basic examples of using sklearn either read a prepared sklearn dataset into memory, or read a file such as a .csv into memory and then process it. Can someone kindly provide an equivalent example for reading database data into sklearn for processing, preferably from MongoDB, but I will take what I can get at this point. I have been struggling with this for a little while now.
I can post what I have done so far, but I don't think it will help. Thanks for any help/advice/pointers.
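One common pattern is to query MongoDB with pymongo, stage the documents in a pandas DataFrame, and then hand NumPy arrays to sklearn. A minimal sketch, where the database, collection, and field names are made up for illustration:

```python
# Sketch: pull documents from MongoDB into memory and fit a scikit-learn model.
import pandas as pd
from pymongo import MongoClient
from sklearn.linear_model import LogisticRegression

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["samples"]        # hypothetical database/collection

# Project only the fields needed; sklearn ultimately just wants in-memory arrays.
cursor = collection.find({}, {"_id": 0, "feature_a": 1, "feature_b": 1, "label": 1})
df = pd.DataFrame(list(cursor))

X = df[["feature_a", "feature_b"]].to_numpy()
y = df["label"].to_numpy()

model = LogisticRegression().fit(X, y)
print(model.score(X, y))
```

For datasets too large for memory, you would batch the cursor and use an estimator that supports partial_fit, but the flow above matches the file-based examples in the sklearn docs.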

How can I create a web page that shows aggregate data from Sawtooth surveys?

I'm guessing this won't apply to 99.99% of the people who see this. I've been doing some Sawtooth survey programming at work, and I need to create a web page that shows some aggregate data from the completed surveys. I was just wondering if anyone else has done this using the flat files that Sawtooth generates and how you went about doing it. I only know very basic Perl, and the server I use does not have PHP, so I'm somewhat at a loss for solutions. Anything you've got would be helpful.
Edit: The problem with offering example files is that it's more complicated than that: it's not a single file, and the data occasionally gets moved to a different file with a different format. Those added complexities are why I'm asking this question.
Doesn't Sawtooth export to CSV format? There are many Perl parsers for CSV files; just about every language has a CSV parser or two (or twelve), MS Excel can open them directly, and they're still plain text, so you can look at them in any text editor.
I know our version of Sawtooth at work (which is admittedly very old) exports Sawtooth data into SPSS format, which can then be exported into various spreadsheet formats including CSV, if all else fails.
If you have a flat (fixed-width field) file, you can easily parse it in Perl using regular expressions or by taking substrings of each line one at a time, assuming you know the widths of the fields (a rough sketch follows at the end of this answer). Your question is too general to give much better advice, sorry.
Matching the values up from a plaintext file with meta-data (variable names and labels, value labels etc.) is more complicated unless you already have the meta-data in some script-readable format. Making all of that stuff available on a web page is more complicated still. I've done it and it can be a bit of a lengthy project to roll your own. There are packages you can buy, like SDA, which will help you build a website where people can browse and download your survey data and view your codebooks.
Honestly, though, the easiest thing to do if you're posting statistical data on a website is to get the data into SPSS or SAS or another statistics package format and post those files for download directly. Then you don't have to worry about it.
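The fixed-width parsing idea mentioned above, sketched in Python for brevity (Perl's substr/unpack work the same way); the file name, field positions, and field names are made up and would really come from the Sawtooth layout or codebook:

```python
# Sketch: parse a fixed-width survey export and tally one question's answers.
from collections import Counter

# (name, start, width) -- hypothetical layout, one respondent per line
FIELDS = [("resp_id", 0, 6), ("q1", 6, 2), ("q2", 8, 2)]

counts = Counter()
with open("survey_export.dat") as fh:          # placeholder file name
    for line in fh:
        record = {name: line[start:start + width].strip()
                  for name, start, width in FIELDS}
        counts[record["q1"]] += 1              # simple aggregate: tally answers to Q1

# The tallies could then be written out as a static HTML table for the web page.
for answer, n in sorted(counts.items()):
    print(answer, n)
```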