What’s the best way to profile Python code when using DataJoint?

We have tried using line_profiler in Jupyter, but it just tells us that the autopopulate function is using all of the CPU time. Is there a simple way for us to get a more detailed profiling report from a .populate() call?
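(Not an official DataJoint recipe, just a sketch that has worked elsewhere; `MyTable` is a placeholder for your computed table.) Since populate() is mostly a driver loop, the time is usually spent inside your make() method, so one option is to run the whole call under cProfile and sort by cumulative time:

```python
# Sketch: profile a populate() call with cProfile.
# MyTable is a placeholder for your dj.Computed / dj.Imported table class.
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
MyTable.populate()          # the call we want to break down
profiler.disable()

# Show the 30 functions with the largest cumulative time, which usually
# points straight at the slow parts of make().
pstats.Stats(profiler).sort_stats("cumulative").print_stats(30)
```

If you would rather stay with line_profiler in Jupyter, pointing it at the make() method instead of populate() usually gives the per-line detail you are after, e.g. `%load_ext line_profiler` followed by `%lprun -f MyTable.make MyTable.populate()`.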

Related

Caching API Results using Flutter and Hive

What is the proper way to cache API results using Hive?
The current way I plan to implement it is using the request URL as the key and the returned data as the body.
Is there a more production-friendly way to do this? I can't find a tutorial, as most tutorials either rely on another package that handles the caching for them or use a different package altogether.
To cache REST API data you can use Hive, a NoSQL database that is easy to work with and faster at retrieval than shared_preferences and sqflite.
For more details you can check this repo to understand better:
https://github.com/shashiben/Anime-details
And you can read this article: https://medium.com/flutter-community/flutter-cache-with-hive-410c3283280c
The code there is written cleanly and structured using the Stacked architecture. Hope this answer is helpful to you.
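Hive itself is a Dart/Flutter package, so the snippet below is only a rough, language-neutral illustration (written in Python, like the other examples on this page) of the URL-as-key pattern described in the question; the in-memory dict stands in for a Hive box, and `fetch_json` is a made-up name.

```python
# Illustration only: the request URL is the cache key, the response body is the value.
# In Flutter you would do the same thing with a Hive box instead of this dict.
import json
import urllib.request

CACHE = {}  # stand-in for a persistent key-value store such as a Hive box

def fetch_json(url, refresh=False):
    """Return cached data for `url`, hitting the network only on a miss."""
    if not refresh and url in CACHE:
        return CACHE[url]
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    CACHE[url] = data  # store under the URL key, as the question proposes
    return data
```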

Functional testing Angular

I am required to functionally test a web app built with Angular 5. The app has a lot of chart widgets: essentially reporting done through charts based on the values in the SQL Server database for specific criteria given through the query bar. I basically have to check for dynamic changes in the charts based on updates (add/delete/change) in the database. There are several different charts affected by those changes, and I have to validate both the UI and the DB using automation. I have been reading that Protractor can be used for e2e testing; would I be able to validate data updates and changes to the charts using Protractor, or could you suggest another tool for this? Also, I am not seeing a lot of blogs about checking dynamically generated charts with Protractor. Please help me with any material you can.
This could be really difficult to do with Protractor if we are talking about a production-like environment with dynamic data. Developing an e2e test for the case you are describing will, at best, give you a flaky test that produces a large number of false failures.
If you are using a library like Highcharts to generate the charts, I would divide the testing into two pieces:
A) The easier part: check that the endpoints that feed the charts return data that matches what is currently in the DB. You could use a module like protractor-intercept (https://www.npmjs.com/package/protractor-intercept) to handle that easily. With this, you will be testing that the data arrives properly from the DB to the client (a rough sketch of this idea appears after this answer).
B) The difficult part: mock the data returned by those endpoints in a test environment (yes, you will need help from the development team). If you know the data you are expecting, it will be easier to assert that it is rendered properly in the front-end charts.
Those kinds of tests are hard to deal with. A few months ago I had to design one of those, and in the end the team decided to cover only the API responses instead of the whole e2e flow.
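As a rough illustration of part A only (this uses Python with requests/pyodbc rather than protractor-intercept, and the endpoint URL, connection string, query, and field names are all invented), the comparison itself can stay very small:

```python
# Rough sketch: compare what a chart endpoint returns with what is in the database.
# URL, connection string, query, and field names are placeholders.
import requests
import pyodbc

api_rows = requests.get("http://myapp.local/api/sales-by-month", timeout=10).json()

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbhost;"
                      "DATABASE=reports;UID=user;PWD=secret")
with conn:
    cursor = conn.cursor()
    cursor.execute("SELECT month, total FROM sales_by_month ORDER BY month")
    db_rows = [{"month": m, "total": float(t)} for m, t in cursor.fetchall()]

assert api_rows == db_rows, "chart endpoint and database disagree"
```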

Calling import.io dataset created with "Chain API" via REST

I created a dataset using 2 extractors: a "many rows" extractor which is then called by a "just one row" extractor via the "URLs from another API / Chain APIs" option. This has given me the data that I need and I have saved it as a dataset. Although the dataset is working, I'm lost as to how to use the REST API to retrieve it.
I'm aware that there used to be an "integrate" button on the dataset page which would outline how to use an external client library to get the data, but in its absence I don't know which of the APIs to use or how to use them.
I've attempted to use the "Query Methods" GET call "/store/connector/{id}/_query", but it requires an "id" and I don't know where to find it. I tried the "_connectorVersionGuid" value from the dataset saved as JSON, but that didn't work.
Any help and advice would be much appreciated.
Thanks,
AJ
AJ,
Francesco here, from import.io.
First of all, thanks for formulating the question so clearly.
I have bad news and good news.
The bad news is that at the moment Bulk and Chain are only available as client-side features, so it's not really possible to call a chain with a single REST call.
The good news is that we are actually working on it :)
Bulk as an API is actually in beta testing, and I hope to have Chain as an API as well.
A workaround I sometimes use myself is an external integration-as-a-service platform, like Node-RED (http://nodered.org/) or built.io (https://www.built.io/).

Displaying web-based visualisation or graphing of data based on a PostgreSQL database?

I am working on a web application for a client that uses a PostgreSQL database. I want the client to be able to go to a certain area of the site where the data from the database is displayed in graph form (for example, sales figures over a six-month period). Is there a plugin I could use for this? (I don't have any experience with this, so an easy one, or one with tutorials available, would be great.) I had a look at BIRT, which says it has a web-based option, but I couldn't really figure it out. I don't want the client to have to download and go through another program; I just want them to go to a URL within their site and have it all presented to them there and then.
Any sort of pointers in the right direction would be greatly appreciated.
Thank you.
Highcharts (http://www.highcharts.com/) works well for this case -- I use it fairly often. It supports Ajax data feeds in JSON format, so you can write an endpoint that returns the JSON representing the data from Postgres and have it called from a JavaScript function that creates the graphs using that data (you would place that call in a ready function).
Also, if you're using Postgres 9.3 or higher, it supports JSON natively, so you can do the JSON conversion in the SQL query itself, as opposed to post-processing the results in your Python or other backend code.
Highcharts is reasonably flexible and allows for a variety of nice-looking, functional charts and graphs. If you want to get much fancier, d3 may be worth a look. These are some of the types of graphs/charts it can do: https://github.com/mbostock/d3/wiki/Gallery
I have not used d3 myself, however.
For the scenario you described above, Highcharts seems like it would work just fine.
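To make that concrete, here is a rough sketch of such an endpoint (Flask and psycopg2 are just one possible stack, and the `sales` table, its columns, and the DSN are invented), with Postgres building the JSON itself via row_to_json/json_agg as described above:

```python
# Sketch of an endpoint that returns chart-ready JSON straight from Postgres (9.3+).
# Table/column names ("sales", "month", "total") and the DSN are placeholders.
from flask import Flask, Response
import psycopg2

app = Flask(__name__)

@app.route("/sales.json")
def sales_json():
    conn = psycopg2.connect("dbname=reports user=web")  # placeholder DSN
    try:
        with conn.cursor() as cur:
            # Let Postgres build the JSON array itself; ::text avoids any
            # re-serialisation on the Python side.
            cur.execute("""
                SELECT coalesce(json_agg(row_to_json(t)), '[]')::text
                FROM (SELECT month, total FROM sales ORDER BY month) AS t
            """)
            body = cur.fetchone()[0]
    finally:
        conn.close()
    return Response(body, mimetype="application/json")
```

On the page, the Highcharts ready/callback function can then request /sales.json and feed the parsed rows into the chart's series.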
It's been a while, and a lot has happened since 2016. There is now Chart.js as well (http://chartjs.org/), for example, which is easier to use than Highcharts and very flexible (I've used both).
What neither of them does is dynamic data selection: if you want your client to decide which data to view, that part you need to write yourself.

Writing arbitrary MongoDB queries to a PHP backend

I'm in the start-up phase of creating an internal system based on PHP and MongoDB. The users of this system are JavaScript programmers and would like to be able to make custom queries to the Mongo database from a frontend GUI using arbitrary Mongo shell queries. Of course this would not be a problem at all if I forced them to write the queries as proper PHP arrays etc., but I would definitely like to avoid this.
I am not quite sure how to approach a feature like this without writing some advanced methods to restructure the queries into properly formatted arrays that can be used with the PHP MongoClient. One approach would be to make use of, e.g., the MongoDB::execute() method and run the JavaScript on the database server, an approach I don't fancy at all.
I'm kindly asking if you have any ideas on how to achieve the requested functionality to some extent.
Thank you in advance.
Are you looking for something like this: http://rockmongo.com/ ?
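A different middle ground from rockmongo, sketched here in Python rather than PHP purely for illustration (the database/collection names and the `run_user_query` helper are invented): have the GUI send the query filter as plain JSON, which the backend can decode and hand to the driver without parsing shell syntax.

```python
# Illustration in Python (the question's backend is PHP): accept the filter as JSON
# from the frontend and hand it to the driver, instead of parsing shell JavaScript.
import json
from pymongo import MongoClient  # assumes pymongo and a local mongod

client = MongoClient("mongodb://localhost:27017")
collection = client["internal"]["events"]  # placeholder database/collection names

def run_user_query(filter_json: str, limit: int = 100):
    """`filter_json` comes from the GUI, e.g. '{"status": "open", "count": {"$gt": 5}}'."""
    query = json.loads(filter_json)  # JSON, not arbitrary shell code
    return list(collection.find(query).limit(limit))
```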