I am using MLflow with a SQLite backend. I started the server with:
mlflow server --backend-store-uri sqlite:///mlruns_db/mlruns.db --default-artifact-root $PWD/mlruns --host 0.0.0.0 -p 5000
In the code, I log the model with a signature as follows:
...
signature = infer_signature(X, y)
mlflow.sklearn.log_model(model, model_name, signature=signature)
...
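For context, a minimal, self-contained version of that logging step might look like this (the toy data, the LinearRegression model, and the tracking URI are illustrative assumptions, not taken from my actual code):

import numpy as np
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature
from sklearn.linear_model import LinearRegression

mlflow.set_tracking_uri("http://localhost:5000")   # the server started above

# toy data and model, purely for illustration
X = np.random.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0])
model = LinearRegression().fit(X, y)

signature = infer_signature(X, model.predict(X))
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model", signature=signature)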
Then I get these warnings:
2022/05/26 19:52:17 WARNING mlflow.models.model: Logging model metadata to the tracking server has failed, possibly due older server version. The model artifacts have been logged successfully under ./mlruns/1/d4c8f611d3f24986a32d19c7d8b03f06/artifacts. In addition to exporting model artifacts, MLflow clients 1.7.0 and above attempt to record model metadata to the tracking store. If logging to a mlflow server via REST, consider upgrading the server version to MLflow 1.7.0 or above.
I am using MLflow version 1.24.0, though.
I see that the signature is correctly logged inside the MLmodel file, but the nice rendering in the MLflow UI is lost.
[screenshot: MLflow UI with logging signature]
[screenshot: MLflow UI without logging signature]
Does this have any consequence later when serving models with signature enforcement?
Also, I see many blog examples with Postgres instead of SQLite, and SFTP/MinIO instead of the local filestore. Maybe changing to one of those setups would solve this?
I had a similar issue, even after running both the server and the local client on 1.30.
You can turn on debug logging:
import logging
logging.getLogger("mlflow").setLevel(logging.DEBUG)
The logging told me it was because the size of the parameters exceeded the 5000-character limit; my model signature was about 10k characters long.
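If you want to check whether you are anywhere near that limit, something like this rough sketch works (assuming signature is the ModelSignature returned by infer_signature, and X and model are your own data and model):

import json
from mlflow.models.signature import infer_signature

signature = infer_signature(X, model.predict(X))
# the signature is recorded as JSON text in the run's model metadata;
# if this length approaches the 5000-character limit, the metadata logging
# step can fail even though the model artifacts are still written
print(len(json.dumps(signature.to_dict())))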
I am trying to use the JasperReports integration for the first time. I am using the included Jetty server, Oracle Database XE 18c, and Windows 7.
I am following the quick start guide https://github.com/daust/JasperReportsIntegration/blob/main/src/doc/github/installation-quickstart.md
I downloaded the zip file and configured database access by adding the schema credentials to the application.properties file as follows:
[datasource:default]
type=jdbc
url=jdbc:oracle:thin:@localhost:1521:XEPDB1
username=hr
password=hr
this parameter is limiting access to the integration for the specified list of ip addresses, e.g.:
ipAddressesAllowed=127.0.0.1,10.10.10.10,192.168.178.31
if the list is empty, ALL addresses are allowed.
Then I deployed the jri.war file successfully, and started the server successfully as well. But when I tried to test it through http://localhost:8090/, I got the following page, and I do not know whether that is normal or something is wrong...
I need to know whether the test was successful, and what is meant by "context" here.
Thanks
You deployed jri.war to the context path /jri; this isn't an error, and is quite normal.
Just access your webapp via http://localhost:8090/jri/
In the AWS documentation for "Connecting to your DB instance using IAM authentication and the AWS SDK for Python (Boto3)", the following call is made to both psycopg2.connect (shown) and mysql.connector.connect:
conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USR, password=token, sslmode='prefer', sslrootcert="[full path]rds-combined-ca-bundle.pem")
cur = conn.cursor()
cur.execute("""SELECT now()""")
query_results = cur.fetchall()
print(query_results)
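(For reference, the token in that snippet is the IAM authentication token that the same AWS example generates a few lines earlier, roughly like this; REGION is a placeholder I am assuming here, alongside the ENDPOINT, PORT, and USR placeholders from the snippet above:)

import boto3

# create an RDS client and request a short-lived IAM auth token,
# which is then passed to psycopg2.connect as the password
client = boto3.client('rds', region_name=REGION)
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)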
I see some discussion about the ssl_ca path (here and here) and what those bundles are used for. But none of the three links I've given here describes the [full path] component given by the AWS docs, or where it points. My current guess (from the second link) is this URL, but I'd like to be sure.
Additionally, what are the advantages of having this bundle downloaded to the remote EC2 instance on which these Python 3 (Boto3) scripts are running?
EDIT: By the way, the above call to psycopg2.connect is currently working in Jupyter with Python 3.9.5 on an EC2 instance, with the [full path] written as-is...
You should replace '[full path]' with the filesystem (directory) path to where you saved the .pem file when you downloaded it (from that last URL you gave) to the local computer.
The advantage of using it is that your client will verify it connected to the correct database, and not some malicious system which is intercepting your traffic. I don't know how advantageous you consider this: if someone has compromised Amazon enough to be intercepting their internal traffic, they might also have compromised their CA as well. But there is at least some possibility they did one without the other.
Your code as shown does not work for me, because ssl_ca is not how it is spelled. Assuming you used the code actually given at your first link for PostgreSQL:
sslmode='prefer', sslrootcert="[full path]rds-combined-ca-bundle.pem"
Then the reason it works despite the bogus path is that 'prefer' means it doesn't care if the root cert is missing; it just skips validation in that case. If you change it to 'verify-full', then presumably it would stop working.
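To illustrate, a rough sketch of the stricter variant (same placeholders as in the question; the certificate path is only an example location, not a prescribed one):

import psycopg2

conn = psycopg2.connect(
    host=ENDPOINT, port=PORT, database=DBNAME, user=USR, password=token,
    sslmode='verify-full',                                   # refuse to connect if the server cert can't be validated
    sslrootcert='/home/ec2-user/rds-combined-ca-bundle.pem'  # wherever you actually saved the downloaded bundle
)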
I'm running a test Dgraph instance in a dgraph/standalone Docker container, using the github.com/dgraph-io/dgo/v200/protos/api API on port 9080 to write data, but I can't see the changes in the Console on port 8000. Using the API to query the previously written data works fine, so I wonder if the API and the Console are somehow using different namespaces?
Are you committing the transaction? I have seen users complain about this when they had simply forgotten to commit the txn.
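To illustrate the idea (sketched here with the Python pydgraph client rather than dgo): data written in a transaction only becomes visible to other clients, including the Console, after commit() (or when the mutation is sent with commit_now=True).

import pydgraph

stub = pydgraph.DgraphClientStub('localhost:9080')
client = pydgraph.DgraphClient(stub)

txn = client.txn()
try:
    txn.mutate(set_obj={'name': 'Alice'})
    txn.commit()      # without this, other clients (e.g. the Console) never see the write
finally:
    txn.discard()     # no-op if the txn was already committed
stub.close()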
I have Fabric Composer 0.7.2 installed on my Mac, and I was able to follow this thread to get it connected to my blockchain (v0.6.1 of Fabric) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my blockchain service from my Docker container (using the same ID, WebAppAdmin, that I used on my Mac), I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
    at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17
  code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my Mac to my Docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing and discovered that if I used an ID that I had not previously used with Composer (e.g. user_type1_0), then I could connect, and I could see a new cert in my .composer-credentials directory.
I tried deleting that container and building a new one (I had messed something else up), but I could not use that same user ID again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (e.g. the hardware address), such that if I try to re-use an ID on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like the right way if I can't use the same ID from different machines.
Thanks!
Hi, I tried to recreate this by having the blockchain running on a Unix machine; I then copied my connection profile and certificate to my Mac, and edited the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using Composer v0.7.4, so you could try that.
I have also faced this issue, and concluded that:
There is inconsistent behavior when deploying a network using Composer in a cloud environment, including Bluemix. The problem is not with Composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6.
ERROR:
throw er; // Unhandled 'error' event
^
Error
    at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
    at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
    at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
So far, we have understood that the following three JIRAs are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the Fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no way to fix this issue on Bluemix or any other cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all of the above-mentioned defects have not been fixed yet.
OS is Mac OS X Mavericks.
In a JHipster context (latest version, 1.2.2), I get an error when I request the default application on an entity I have just generated using the yo jhipster:entity generator.
I ran yo jhipster to create a vanilla application with MongoDB as the database, Java 7, and nothing special.
Then I ran grunt build and grunt server for hot reload on the client part, and mvn spring-boot:run for the server-side app.
When I go to the http://localhost:8080/ URL, I get the normal page. I can sign in with either the user or admin login.
I ran yo jhipster:entity foo to get an example of a REST service in the back end.
When I request the foo resource at the URL http://localhost:8080/#/foo, I get the page to CRUD the resource, as described on the JHipster website.
But when I try to create a foo item with the modal form, I get an error in the back-end server log ([WARN] org.springframework.web.servlet.PageNotFound - Request method 'POST' not supported).
I can't figure out how to solve this.
Am I missing something in the documentation?
Do you have any ideas or hints?
I have the same issue using H2 as the development database instead of MongoDB.
Thanks.
Hervé
This might be due to MongoDB, if you have a date field.
We will very soon release a new and improved Entity sub-generator, which should work better for you. While testing it, I had a serialization issue with MongoDB and a date field, and I corrected it in this new version. This is due to Jackson, which can't serialize Joda Time dates (the correct annotations were only generated for SQL databases, not NoSQL databases).