I'm trying to make use of the new option to send HTTP/REST requests to the TensorFlow ModelServer. However, when I try to run the following, it doesn't recognize the --rest_api_port argument:
tensorflow_model_server --rest_api_port=8501 \
--model_name=half_plus_three \
--model_base_path=$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_three/
unknown argument: --rest_api_port=8501
I've encountered the same error and looked through the source code.
In main.cc there is no rest_api_port option in version r1.7 and below.
Because of this, if you want to use the REST API you need TensorFlow Serving r1.8 or above, or you have to implement it yourself.
Hope this is helpful to you.
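For reference, once you are on a build that supports it (r1.8 or newer), a REST call against a server started with the flags from the question would look roughly like this; the input values are just placeholders:
# Hypothetical prediction request against the REST endpoint (TF Serving r1.8+),
# assuming the server was started with --rest_api_port=8501 and --model_name=half_plus_three
curl -X POST http://localhost:8501/v1/models/half_plus_three:predict \
  -d '{"instances": [1.0, 2.0, 5.0]}'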
I am trying to implement TSOA with an existing HapiJS server and would like some insight on the best approach.
You can run tsoa spec-and-routes to generate routes.ts and swagger.json. However, running this manually before running the node process is less than ideal.
The solution would then be to run them programmatically using the APIs provided by the TSOA library. However, when registering the routes with my Hapi server, I need to import the generated routes.ts file, e.g. import { RegisterRoutes } from '../build/routes'.
So when I run the node process and generate the routes during startup (programmatically), it tries to grab '../build/routes.ts' before it has been generated, which produces an error and the node process exits.
What is the way around this?
tsoa spec-and-routes && node bin/node ?
Any clarification would be greatly appreciated. Thanks.
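For what it's worth, one way to sidestep the chicken-and-egg problem is to make generation a separate step that finishes before the node process starts, rather than generating in-process. A rough sketch of that, assuming the compiled entry point is build/server.js (that path is an assumption here):
# Generate routes.ts and swagger.json first, then compile and start the server,
# so that '../build/routes' already exists by the time it is imported.
npx tsoa spec-and-routes && npx tsc && node build/server.js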
I know that every command I enter in Kubernetes communicates with the API.
Now I want to talk to the API directly.
How can I find the JSON format for every command?
I suggest using a client library if you are calling the API from a programming language:
https://kubernetes.io/docs/reference/#api-client-libraries
Or use kubectl if you are working from the CLI. Hardcoding API schemas adds a maintenance burden; you're basically reimplementing the client in that case.
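If the goal is just to see which requests and JSON bodies kubectl sends for a given command, one simple trick is to raise kubectl's verbosity; at -v=8 it logs the HTTP method, URL and request body of every call it makes:
# Print the underlying API requests (method, URL, body) that kubectl issues
kubectl get pods -v=8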
The following is the Kubernetes API reference documentation; you can find the equivalent API for each resource here:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/
Hope this helps
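As a rough illustration of going from the reference docs to an actual request (the namespace and the deployment.json payload file below are placeholders), you can open an authenticated proxy to the API server and hit the documented paths with curl:
# Proxy the API server to localhost, then call the documented endpoints directly
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
# Creating a resource is a POST of the JSON manifest to the matching collection endpoint
curl -X POST http://localhost:8001/apis/apps/v1/namespaces/default/deployments \
  -H "Content-Type: application/json" \
  -d @deployment.json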
I run Spark in both client and cluster mode. Is there any REST URL that can be used to kill running Spark apps and drivers?
Spark has a hidden REST API. It's likely that it will become public in the future (see issue SPARK-12528), but at the moment it's still "private", so use it at your own risk: if something changes in the API in the next Spark version, you will need to update your code.
Otherwise, you can use Spark-server, but this will bring along more packages/dependencies, which you might not need.
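For illustration, the hidden REST API lives on the standalone master's REST port (6066 by default) and exposes paths of the form /v1/submissions/<action>/<submissionId>; a status check would look roughly like this, with the host and submission id being placeholders:
# Check the status of a submission before killing it (standalone cluster mode)
curl http://spark-cluster-ip:6066/v1/submissions/status/driver-20151008145126-0000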
curl -X PUT 'http://localhost:8088/ws/v1/cluster/apps/application_1524528223375_0082/state' -d '{"state": "KILLED"}'
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_State_API
If running on YARN, you can use "yarn application -kill application_XXXX_ID" to kill an application.
This command can also be issued using the YARN REST APIs, with a decent description of the calls listed here or in the official docs.
The blog post apache-spark-hidden-rest-api actually uses the YARN REST API.
That said, the above is possible only on YARN.
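If you don't know the application id yet, the same ResourceManager REST API can list it for you (the ResourceManager host below is a placeholder):
# List running applications to find the id of the app you want to kill
curl "http://<resourcemanager-host>:8088/ws/v1/cluster/apps?states=RUNNING"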
Please try this if you have the submission ID:
curl -X POST http://spark-cluster-ip:6066/v1/submissions/kill/driver-20151008145126-0000
I'm using orientdb-community-2.2-alpha and I'm trying to use JSON payloads with the HTTP command API (as in calling http://<host>:<port>/command/<database>/sql) and I simply can't figure out how to do it. All I get is an OCommandExecutorNotFoundException saying Cannot find a command executor for the command request: sql.<whatever JSON I tried here> no matter what I try.
I'm not providing an example of what I've tried as I'm not trying to do any one specific thing; I would just like to see a working curl example of how to post a generic command request using a JSON payload.
I can use JSON with batch requests just fine, it's just the command API that I can't get to work.
You can try with the Postman plugin.
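For reference, a plain curl invocation would look roughly like the sketch below; the database name, credentials and query are placeholders, and it assumes the JSON body carries the statement in a "command" field as described in the OrientDB HTTP API docs:
# POST a JSON payload to the command API (content type must be application/json)
curl -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST http://localhost:2480/command/mydb/sql \
  -d '{"command": "SELECT FROM OUser"}'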
I am currently using the JPMML Openscoring REST API.
I have successfully installed Maven and built the uber-JAR file and I am also able to access
http://localhost:8080/OpenScoring/rules.pmml
I am confused by the instructions given at
https://github.com/jpmml/openscoring.
It says the sample curl invocation is
curl -X GET http://localhost:8080/openscoring/model
but I am getting a 404 error when I try to implement this. What does model mean here?
I am getting an output when I run this:
curl -X GET http://localhost:8080/Openscoring/rules.pmml
The /model/ part of the path identifies the resource type. The general formula for the path component of Openscoring service URLs is /<context path>/<resource type>/<resource identifier>/<action>
In your case (assuming that the model identifier is rules.pmml), the correct path component would be /openscoring/model/rules.pmml.
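For example, assuming the model identifier is rules.pmml and the PMML file sits in the current directory, deploying and then querying the model would look roughly like this:
# Deploy the PMML file as a model resource with identifier rules.pmml
curl -X PUT --data-binary @rules.pmml -H "Content-type: text/xml" \
  http://localhost:8080/openscoring/model/rules.pmml
# Retrieve the deployed model's summary
curl -X GET http://localhost:8080/openscoring/model/rules.pmml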
I was getting a 404 error because I did not put my rules.pmml file in the directory that my command prompt pointed to. (This was a very silly mistake.)
And thanks to the user Anik Islam Abhi in the comments, I found out what model in the invocation
curl -X GET http://localhost:8080/openscoring/model meant.
model is just an endpoint of the REST service, not a directory in the openscoring folder or any kind of path.