I have a front end with forms for data entry; I enter data there and it goes to the backend, but I need to intercept it and write it to the database. I have a script that writes data to a database, but I don't understand how to intercept the data. I am using the Flask framework. Help me please!
@app.route('/')
def main_page():
    return "<html><head></head><body>A RESTful API using Flask.</body></html>"

@app.route('/api/v1/reports/', methods=['GET'])
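That last decorator still needs a view function attached. To intercept the posted form data, the route could accept POST and read request.form — a minimal sketch, where the field names and the save_to_db helper are stand-ins for my actual script:

from flask import Flask, request

app = Flask(__name__)

@app.route('/api/v1/reports/', methods=['GET', 'POST'])
def reports():
    if request.method == 'POST':
        # Flask exposes the submitted form fields on request.form;
        # the field names here are hypothetical.
        title = request.form['title']
        body = request.form['body']
        save_to_db(title, body)  # stand-in for the existing DB-writing script
        return 'saved', 201
    return 'send the form data in a POST request'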
I am trying to run queries in our ERP system to collect large amounts of data. I can access our Epicor queries via REST API V2. I'm troubleshooting in Postman, with the goal of having Python automate the data collection.
My header in Postman contains my basic HTTPS authentication and API key. I'm using the suffix /Data to show the data from this specific query.
[Image: API call in Postman using basic HTTPS authentication (username/password) and the API key in the header.]
However, in Python I can't figure out how to extract the same data. I've tried the API HTTPS address with /?api-key=" "/Data?, but it only returns the metadata of the query. I believe this is a syntax issue within Python that I'm missing, because it clearly works in Postman.
How do I correctly call this API in Python to extract the full data and not just the metadata? [Image: metadata returned by Python, with "/Data?" removed from the query.]
Note that no combination of suffixes (i.e. /Data?, &metadata#data) works in Python. It only returns the metadata and nothing else.
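(For reference, a minimal sketch of the equivalent requests call, with the key sent as a header the way Postman does rather than in the URL; the host, query name, and the x-api-key header name are assumptions, not verified values:)

import requests

# Hypothetical values -- substitute your Epicor host, BAQ name, and credentials.
url = "https://my-epicor-server/api/v2/odata/MyCompany/BaqSvc/MyQuery/Data"
headers = {"x-api-key": "MY_API_KEY"}  # assumed header name; match what Postman sends

resp = requests.get(url, headers=headers, auth=("username", "password"))
resp.raise_for_status()
print(resp.json())  # full query data rather than just the metadata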
I am trying to make a single-route Flask web app in which the user submits a URL to the client, some data from the URL is extracted on the server side and appended to a Postgres DB. Next, further processing is done on the extracted data, and the URL's entry in the DB is updated. All this processing is done as background tasks using Celery. The workflow looks something like this:
from flask import request
from celery import chain

@app.route('/process/<path:url>')  # route rule added so the sketch runs
def app_route(url):
    # queue the two tasks to run one after the other as a Celery chain
    chain(task1.s(url), task2.s(url)).delay()
    return 'queued'

@celery.task
def task1(url):
    out1 = some_long_task(url)
    # append (url, out1) to the DB

@celery.task
def task2(url):
    out2 = some_other_long_task(url)
    # update the row for url in the DB with out2
The reason we do this is that these two tasks are long, and the user should be able to see their status in the client. Hence we write out1 to the DB first so the user can see the intermediate status, and then out2 so the user can see the final status. At any point, the user can visit the home page, which displays the URLs currently in the DB along with their data.
This throws me an error: (psycopg2.OperationalError) SSL error: decryption failed or bad record mac
The URL with out1 is appended to the DB correctly, without issue. But the second time, when we try to update the row we appended in the previous task, it throws the above error. I am guessing Flask-SQLAlchemy only allows a single session to a DB in one request, hence the error. What can I do to solve this issue?
I solved the error. I switched from Flask-SQLAlchemy to plain SQLAlchemy to connect to the database, since it offers more visibility. Dispose of the engine object by calling engine.dispose() after connecting the app to the database, and the error disappears.
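A minimal sketch of that pattern inside one of the tasks; the connection string, table, and columns are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URI

@celery.task
def task2(url):
    out2 = some_other_long_task(url)
    with engine.begin() as conn:
        # placeholder UPDATE; adjust the table and columns to your schema
        conn.execute(text("UPDATE entries SET out2 = :o WHERE url = :u"),
                     {"o": out2, "u": url})
    # Drop the pooled connections so the next task does not reuse a
    # socket inherited from a forked worker (the source of the SSL error).
    engine.dispose()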
I have an API running on Flask which is connected to MongoDB, and it uses this DB for reading only.
I connect to the DB before the first request:
from pymongo import MongoClient

@app.before_first_request
def load_dicti():
    c = MongoClient('mongodb://' + app.config['MONGO_DSN'], connect=False)
    db = c.my_name
    app.first, app.second = dictionary_compilation(db.my_base, another_dictionary)
However, this MongoDB may be updated from time to time. My API doesn't know about it, because the DB was already loaded before the first request.
What's the most efficient way to cope with this? I'd be grateful for explanations and code examples.
I can't quite figure out what you are trying to do, but the Application Context may be the best practice here. As in the demo in the Flask docs, you could do:
from flask import g
from pymongo import MongoClient

def get_db():
    """Opens a new database connection if there is none yet for the
    current application context.
    """
    if not hasattr(g, 'db'):
        c = MongoClient('mongodb://' + app.config['MONGO_DSN'], connect=False)
        g.db = c.my_name
    return g.db
Then you can use get_db() directly in your view functions; MongoDB will be connected only once, when there is no db attribute on g yet.
If your connection is so unstable that you need to refresh it every time, you could connect on every request or every session instead.
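For example, a view function using it might look like this (the route and the lookup are illustrative; my_base is the collection name from your question):

@app.route('/lookup/<key>')
def lookup(key):
    db = get_db()  # connects on first use in this app context, then reuses
    doc = db.my_base.find_one({'key': key})  # hypothetical query
    return str(doc)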
Is it possible to create an H2OFrame using H2O's REST API, and if so, how?
My main objective is to utilize models stored inside H2O to make predictions on external H2OFrames.
I need to be able to generate those H2OFrames externally from JSON (I suppose by calling an endpoint).
I read the API documentation but couldn't find any clear explanation. I believe the closest endpoints are /3/CreateFrame, which creates random data, and /3/ParseSetup, but I couldn't find any reliable tutorial.
Currently there is no REST API endpoint to directly convert a JSON record into a Frame object. Thus, the only way forward for you would be to first write the data to a CSV file, then upload it to h2o using POST /3/PostFile, and then parse it using POST /3/Parse.
(Note that the POST /3/PostFile endpoint is not in the documentation. This is because it is handled separately from the other endpoints. Basically, it's an endpoint that takes an arbitrary file in the body of the POST request and saves it as a "raw data file".)
The same job is much easier to do in Python or in R: for example, in order to upload some dataset into h2o for scoring, you only need to say
df = h2o.H2OFrame(plaindata)
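For instance, a minimal end-to-end sketch with the Python client; the model id and the data here are made up:

import h2o

h2o.init()  # or h2o.connect(url=...) to attach to an already-running server

# Build a small frame directly from in-memory data
df = h2o.H2OFrame({"x": [1.0, 2.0, 3.0], "y": ["a", "b", "c"]})

model = h2o.get_model("my_model_id")  # hypothetical id of a model on the cluster
preds = model.predict(df)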
I am already doing something similar in my project. Since there is no REST API endpoint to directly convert a JSON record into a Frame object, I do the following:
1. For model building: first transfer and write the data into a CSV file on the machine where the h2o server or cluster is running. Then import the data into h2o using /3/ImportFiles, and then parse it, build a model, etc. I use the h2o-bindings APIs (RESTful APIs) for this. Since I have large data (hundreds of MBs to a few GBs), I use /3/ImportFiles instead of POST /3/PostFile, as the latter is slow for uploading large data. A sketch of the import call follows this list.
2. For model scoring or prediction: I use the model MOJO and POJO. In your case, use POST /3/PostFile as suggested by @Pasha if your data is not large. But, as per the h2o documentation, it's advisable to use the MOJO or POJO for model scoring or prediction in a production environment, and not to call the h2o server/cluster directly. MOJOs and POJOs are thread-safe, so you can scale them using multithreading for concurrent requests.
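A rough sketch of that import call over plain HTTP; the server address and file path are placeholders, and I'm assuming the endpoint accepts GET with a path parameter as in the current h2o REST docs:

import requests

H2O_URL = "http://localhost:54321"  # placeholder h2o server address

# Ask h2o to import a file that already sits on the server's filesystem.
resp = requests.get(H2O_URL + "/3/ImportFiles",
                    params={"path": "/data/train.csv"})  # placeholder path
resp.raise_for_status()
print(resp.json().get("destination_frames"))  # raw frame keys to pass on to /3/ParseSetup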
I am new to iPhone development, and I am working on a project that requires putting the database on a server.
I have a link to the server and have created our table on the server.
Searching the net, I found the process below.
The Client-Side Application:
1) receive input from the user (or client app event)
2) generate SQL request
3) XML encode the request if necessary
4) pass the request to the web service
The Server-Side Application:
1) receive request
2) decode (XML parse) the input if necessary
3) analyze the request
4) perform the DB query
5) XML encode the query results
6) return the results
The Client-Side Application:
1) receive results
2) decode (XML parse) the results if necessary
3) analyze the results if necessary
4) generate DB query to update client DB if necessary
5) perform the DB query to update client DB if necessary
6) display results if necessary
But I have no idea how to implement it. Please, can anyone tell me how?
Or: is this even possible in my application?
Thanks in advance.
The cool kids are using RESTful web services for this type of distributed application architecture. You need a team to do this successfully.
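To make that concrete, here is a minimal sketch of what the server side of the workflow above could look like as a RESTful endpoint, written in Flask to match the rest of this page; JSON stands in for the XML encoding steps, and the table and fields are made up:

from flask import Flask, jsonify, request
import sqlite3  # stand-in for whatever server-side database you use

app = Flask(__name__)

@app.route('/api/items', methods=['GET', 'POST'])
def items():
    conn = sqlite3.connect('app.db')  # placeholder database file
    if request.method == 'POST':
        # server steps 1-4: receive the request, decode it, perform the DB query
        payload = request.get_json()
        conn.execute('INSERT INTO items (name) VALUES (?)', (payload['name'],))
        conn.commit()
        conn.close()
        return jsonify({'status': 'ok'}), 201
    # server steps 5-6: encode the query results and return them
    rows = conn.execute('SELECT id, name FROM items').fetchall()
    conn.close()
    return jsonify([{'id': r[0], 'name': r[1]} for r in rows])

The client application would then POST its user input to /api/items and GET the same URL to refresh its local copy of the data.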