Error: "serving_default not found in signature def" when testing prediction - prediction

I followed this tutorial and came to the point where I can test a prediction using the following code:
{
  "instances": [
    {"csv_row": "44, Private, 160323, Some-college, 10, Married-civ-spouse, Machine-op-inspct, Husband, Black, Male, 7688, 0, 40, United-States", "key": "dummy-key"}
  ]
}
However, I am getting the following error:
{
  "error": "{ \"error\": \"Serving signature name: \\\"serving_default\\\" not found in signature def\" }"
}
I presume the input format doesn't match the expected input, but I am not entirely sure what is expected.
Any ideas as to what is causing the example code to throw this error?

I finally figured it out: I loaded the TensorFlow model in a Jupyter notebook and printed out the signatures:
import tensorflow as tf  # needed in a fresh notebook

new_model = tf.keras.models.load_model('modelPath')
print(list(new_model.signatures.keys()))
The result was: [u'predict']
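To double-check what that signature expects, you can also print its input and output specs in the same notebook (a small sketch on top of the code above; structured_input_signature and structured_outputs are standard ConcreteFunction attributes):
sig = new_model.signatures['predict']
print(sig.structured_input_signature)  # TensorSpecs describing the expected input (e.g. csv_row, key)
print(sig.structured_outputs)          # output tensors, e.g. the 'classes' field used in the gcloud --format below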
So the command I used to get a prediction is:
georg@Georgs-MBP ~ % gcloud ai-platform predict \
  --model $MODEL_NAME \
  --version "v1" \
  --json-instances sample_input.json \
  --format "value(predictions[0].classes[0])" \
  --signature-name "predict"
result:
Using endpoint [https://europe-west3-ml.googleapis.com/]
<=50K

To add signature serving_default:
import tensorflow as tf
m = tf.saved_model.load("tf2-preview_inception_v3_classification_4")
print(m.signatures) # _SignatureMap({}) - Empty
t_spec = tf.TensorSpec([None,None,None,3], tf.float32)
c_func = m.__call__.get_concrete_function(inputs=t_spec)
signatures = {'serving_default': c_func}
tf.saved_model.save(m, 'tf2-preview_inception_v3_classification_5', signatures=signatures)
# Test new model
m5 = tf.saved_model.load("tf2-preview_inception_v3_classification_5")
print(m5.signatures) # _SignatureMap({'serving_default': <ConcreteFunction signature_wrapper(*, inputs) at 0x17316DC50>})
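If you want to verify the new signature before serving it, a quick call with a dummy batch works (a sketch; it assumes this model takes a batch of float32 RGB images, e.g. 299x299 for Inception V3):
import tensorflow as tf

dummy = tf.zeros([1, 299, 299, 3], tf.float32)  # matches the [None, None, None, 3] TensorSpec above
outputs = m5.signatures['serving_default'](inputs=dummy)
print(outputs)  # dict of output tensors returned by the wrapped __call__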

Related

Azure Database for PostgreSQL flexible server deployment fails with databaseName param error

I'm trying to deploy a managed PostgreSQL service with Bicep, and in most cases I get an error:
"code": "InvalidParameterValue",
"message": "Invalid value given for parameter databaseName. Specify a valid parameter value."
I've tried various names for the DB; in the last version of the script I even add a random suffix to make it unique. It still finishes with an error, but the service looks like it is working. Another unexplainable thing is that sometimes the script finishes without an error... It's part of my IaC scenario, and I need to be able to rerun it many times...
Bicep code:
param location string

@secure()
param sqlserverLoginPassword string

param rand string = uniqueString(resourceGroup().id) // Generate unique string
param sqlserverName string = toLower('invivopsql-${rand}')
param sqlserverAdminName string = 'invivoadmin'
param psqlDatabaseName string = 'postgres'

resource flexibleServer 'Microsoft.DBforPostgreSQL/flexibleServers@2021-06-01' = {
  name: sqlserverName
  location: location
  sku: {
    name: 'Standard_B1ms'
    tier: 'Burstable'
  }
  properties: {
    createMode: 'Default'
    version: '13'
    administratorLogin: sqlserverAdminName
    administratorLoginPassword: sqlserverLoginPassword
    availabilityZone: '1'
    storage: {
      storageSizeGB: 32
    }
    backup: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
  }
}
Please follow this GitHub issue for a similar error; it might help you fix your problem.

JupyterHub - log current user

I use a custom logger to log who is currently doing what in JupyterHub.
import os  # needed for os.environ below

logging_config: dict = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "company": {
            "()": lambda: MyFormatter(user=os.environ.get("JUPYTERHUB_USER", "Unknown"))
        },
    },
    ....

c.Application.logging_config = logging_config
Output:
{"asctime": "2022-06-29 14:13:43,773", "level": "WARNING", "name": "JupyterHub", "message": "Updating Hub route http://127.0.0.1:8081 \u2192 http://jupyterhub:8081", "user": "Unknown"
The logger itself works fine, but I am not able to log who was performing the action. In the image I start, there is a JUPYTERHUB_USER env variable available. This seems to get passed from JupyterHub (I don't know exactly how this is done), but in JupyterHub itself I don't have this variable available.
Is there a way to use it in JupyterHub, not just in the JupyterLab container?
This doesn't get you all the way there, but it's a start: we add extra pod annotations/labels through KubeSpawner's extra_annotations using the cluster_options hook (see our Helm chart for our complete daskhub setup):
dask-gateway:
  gateway:
    extraConfig:
      optionHandler: |
        from dask_gateway_server.options import Options, String, Select, Mapping, Float, Bool
        from math import ceil

        def cluster_options(user):
            def option_handler(options):
                extra_annotations = {
                    "hub.jupyter.org/username": user.name
                }
                default_extra_labels = {
                    "hub.jupyter.org/username": user.name,
                }
            return Options(
                Select(
                    ...
                ),
                ...,
                handler=option_handler,
            )

        c.Backend.cluster_options = cluster_options
You can then poll pods with these labels to get real time usage. There may be a more direct way to do this though - not sure.
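For example, with the official Kubernetes Python client you could list the pods carrying that label (a rough sketch; the namespace and username below are placeholders for your own setup):
from kubernetes import client, config

config.load_incluster_config()  # or config.load_kube_config() when running outside the cluster
v1 = client.CoreV1Api()

# List pods labeled with the JupyterHub username set above (placeholder values)
pods = v1.list_namespaced_pod(
    namespace="daskhub",
    label_selector="hub.jupyter.org/username=some-user",
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)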

Pod "no2-pipeline-x5kpd-2954674781" is invalid: spec.volumes[3].name: Duplicate value: "no2-pvc"

Hi, I am trying to run a Kubeflow pipeline.
Two steps run in parallel and dump data to two different folders of a PVC; then the third component collects the data from those two folders, merges it, and dumps the merged data to another PVC folder.
Here is my pipeline code:
vop = dsl.VolumeOp(
    name='no2-pvc',
    resource_name="no2-pvc",
    size="100Gi",
    modes=dsl.VOLUME_MODE_RWO
)

##LOADING POSITIVE DATA##
load_positive_data = dsl.ContainerOp(
    name='load_positive_data',
    image=load_positive_data_image,
    command="python",
    arguments=[
        "/app/load_positive_data.py",
    ],
    pvolumes={"/mnt/positive/": vop.volume}).apply(gcp.use_gcp_secret("user-gcp-sa"))

##LOADING NEGATIVE DATA##
load_negative_data = dsl.ContainerOp(
    name='load_negative_data',
    image=load_negative_data_image,
    command="python",
    arguments=[
        "/app/load_negative_data.py",
    ],
    pvolumes={"/mnt/negative/": vop.volume}).apply(gcp.use_gcp_secret("user-gcp-sa"))

##MERGING POSITIVE AND NEGATIVE DATA##
marge_pos_neg_data = dsl.ContainerOp(
    name='marge_pos_neg_data',
    image=marged_data_image,
    command="python",
    arguments=[
        "/app/merge_neg_pos.py"
    ],
    pvolumes={"/mnt/positive/": load_negative_data.pvolume, "/mnt/negative/": load_positive_data.pvolume}
    # volumes={'/mnt': vop.after(load_negative_data, load_positive_data)}
).apply(gcp.use_gcp_secret("user-gcp-sa")).after(load_positive_data, load_negative_data)

##PROCESSING MARGED DATA##
process_marged_data = dsl.ContainerOp(
    name='process_data',
    image=perpare_merged_data_image,
    command="python",
    arguments=[
        "/app/prepare_all_dataset.py"
    ],
    pvolumes={"/mnt/pos_neg": marge_pos_neg_data.pvolume}
).apply(gcp.use_gcp_secret("user-gcp-sa")).after(marge_pos_neg_data)
load-positive-data and load-negative-data are working fine but the marge-pos-neg-data step is giving the following error:
This step is in Error state with this message:
task 'no2-pipeline-x5kpd.marge-pos-neg-data'
errored: Pod "no2-pipeline-x5kpd-2954674781" is invalid:
spec.volumes[3].name: Duplicate value: "no2-pvc"
Hoping for your help to resolve the issue.
pvolumes={"/mnt/positive/": vop.volume}) and pvolumes={"/mnt/negative/": vop.volume}) was creating two separate pvc's.

K6 GET request results in error against specific endpoint URL

I am new to K6 and am trying to use the tool to perform a GET request to verify an API.
When the script is executed I get a warning that terminates the script. As far as I understand, this error is somewhat related to Go (if I have understood it correctly).
The result I want to achieve is to execute the GET request against the endpoint URL, but I would appreciate any feedback on whether I have done anything incorrectly or should try another approach.
Script:
import http from "k6/http";
import { check } from "k6";

export default function () {
  var url =
    "https://endpoint.example.to.cloud/api/reports/v1/SMOKETESTC6KP6NWX";
  var headerParam = {
    headers: {
      "Content-Type": "application/json",
    },
  };
  const response = http.get(url, headerParam);
  check(response, {
    "Response status receiving a 200 response": (r) => r.status === 200,
  });
  let body = JSON.parse(response.body);
}
Output:
WARN[0000] Request Failed error="Get \"https://endpoint.example.to.cloud/api/reports/v1/SMOKETESTC6KP6NWX\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"
Changing URL endpoint:
If I change the URL endpoint (to a mock URL) like below, there are no errors:
...
var url = "https://run.mocky.io/v3/16fa8113-57e0-4e47-99b9-b5c55da93d71";
...
Updated solution to run this locally:
In order to run this locally I had to add the certificate and key:
Example:
export let options = {
  ...
  tlsAuth: [
    {
      cert: open(`${__ENV.Certificate}`),
      key: open(`${__ENV.Key}`),
    },
  ],
};
In addition, add --insecure-skip-tls-verify to the run command.
Example:
k6 run -e Certificate=/home/cert/example_certification.crt -e Key=/home/cert/certification/example_key.key example.js --insecure-skip-tls-verify
k6 is written in Go, and the latest versions of Go have a breaking change in how they handle X.509 certificates: https://golang.org/doc/go1.15#commonname
As it says in the error message, you can temporarily allow the old behavior by setting a GODEBUG=x509ignoreCN=0 environment variable, but that will likely stop working in a few months with Go 1.17. Using the insecureSkipTLSVerify k6 option might also work, I haven't checked, but as the name implies, that stops any TLS verification and is insecure.
So the real solution is to re-generate your server-side certificate properly.

Passing environment variables to NOW

I am trying to pass Firebase environment variables for deployment with Now.
I have encoded these variables manually with base64 and added them to Now with the following command:
now secrets add firebase_api_key_dev "mybase64string"
The encoded string was placed within quotation marks ("").
The secrets show up in my CLI tool and I can see them all using the list command:
now secrets ls
> 7 secrets found under project-name [499ms]
name created
firebase_api_key_dev 6d ago
firebase_auth_domain_dev 6d ago
...
In my firebase config, I am using the following code:
const config = {
  apiKey: Buffer.from(process.env.FIREBASE_API_KEY, "base64").toString(),
  authDomain: Buffer.from(process.env.FIREBASE_AUTH_DOMAIN, "base64").toString(),
  ...
}
In my now.json file I have the following code:
{
  "env": {
    "FIREBASE_API_KEY": "@firebase_api_key_dev",
    "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain_dev",
    ...
  }
}
Everything works fine in my local environment (when I run next), as I also have a .env file with these variables, yet when I deploy my code I get the following error in my Now console:
TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined
Does this indicate that my environment variables are not being read? What's the issue here? It looks like they don't exist at all.
The solution was to replace my existing now.json with:
{
  "build": {
    "env": {
      "FIREBASE_API_KEY": "@firebase_api_key",
      "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain",
      "FIREBASE_DATABASE_URL": "@firebase_database_url",
      "FIREBASE_PROJECT_ID": "@firebase_project_id",
      "FIREBASE_STORAGE_BUCKET": "@firebase_storage_bucket",
      "FIREBASE_MESSAGING_SENDER_ID": "@firebase_messaging_sender_id",
      "FIREBASE_APP_ID": "@firebase_app_id",
      "FIREBASE_API_KEY_DEV": "@firebase_api_key_dev",
      "FIREBASE_AUTH_DOMAIN_DEV": "@firebase_auth_domain_dev",
      "FIREBASE_DATABASE_URL_DEV": "@firebase_database_url_dev",
      "FIREBASE_PROJECT_ID_DEV": "@firebase_project_id_dev",
      "FIREBASE_STORAGE_BUCKET_DEV": "@firebase_storage_bucket_dev",
      "FIREBASE_MESSAGING_SENDER_ID_DEV": "@firebase_messaging_sender_id_dev",
      "FIREBASE_APP_ID_DEV": "@firebase_app_id_dev"
    }
  }
}
I was missing the build header.
I had to contact ZEIT support to help me identify this issue.