Curl query to TensorFlow Serving model predict REST API breaks

I'm following the TensorFlow Serving with Docker tutorial and querying with:
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
It returns:
C:\WINDOWS\system32>curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
curl: (3) [globbing] bad range in column 2
curl: (6) Could not resolve host: 2.0,
curl: (3) [globbing] unmatched close brace/bracket in column 4
{ "error": "JSON Parse error: Invalid value. at offset: 0" }
But the Docker container is running fine:
PS E:\git_portable> docker run -t --rm -p 8501:8501 -v "E:\git_portable\serving\tensorflow_serving\servables\tensorflow\testdata\saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
2019-11-10 07:11:17.037045: I tensorflow_serving/model_servers/server.cc:85] Building single TensorFlow model file config: model_name: half_plus_two model_base_path: /models/half_plus_two
2019-11-10 07:11:17.037797: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-11-10 07:11:17.037861: I tensorflow_serving/model_servers/server_core.cc:573] (Re-)adding model: half_plus_two
2019-11-10 07:11:17.158245: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: half_plus_two version: 123}
2019-11-10 07:11:17.158435: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.158496: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.158573: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/half_plus_two/00000123
2019-11-10 07:11:17.170610: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-11-10 07:11:17.172642: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-10 07:11:17.212202: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2019-11-10 07:11:17.230431: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /models/half_plus_two/00000123
2019-11-10 07:11:17.236016: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 77445 microseconds.
2019-11-10 07:11:17.237262: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:105] No warmup data file found at /models/half_plus_two/00000123/assets.extra/tf_serving_warmup_requests
2019-11-10 07:11:17.247605: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.250931: I tensorflow_serving/model_servers/server.cc:353] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2019-11-10 07:11:17.252948: I tensorflow_serving/model_servers/server.cc:373] Exporting HTTP/REST API at:localhost:8501 ...
When I run a plain curl against localhost, it returns fine:
C:\WINDOWS\system32>curl http://localhost:8501/v1/models/half_plus_two
{
  "model_version_status": [
    {
      "version": "123",
      "state": "AVAILABLE",
      "status": {
        "error_code": "OK",
        "error_message": ""
      }
    }
  ]
}
What am I doing wrong here?

We had the same issue; here is the fix.
Windows' cmd doesn't support single-quoted strings. Use " and escape the inner quotes with \", as explained in this link: Windows: curl with json data on the command line.
Now:
C:\WINDOWS\system32>curl -d "{\"instances\": [1.0, 2.0, 5.0]}" -X POST http://127.0.0.1:8501/v1/models/half_plus_two:predict
{
    "predictions": [2.5, 3.0, 4.5]
}
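If the escaping gets unwieldy, another option is to let curl read the payload from a file, which sidesteps the shell's quoting rules entirely. A minimal sketch, assuming a file named request.json that you create yourself:

# request.json contains: {"instances": [1.0, 2.0, 5.0]}
# -d @file tells curl to read the request body from the file, so no quote escaping is needed.
curl -d @request.json -X POST http://localhost:8501/v1/models/half_plus_two:predict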

Related

Kong admin API giving 404 to /services path

I am trying to set up the Kong API gateway inside Rancher Kubernetes in DB mode (PostgreSQL PV), installed through Helm with the latest version. The admin API that Kong exposes doesn't have the services or routes paths available, so calling the API with the "/services" path gives a 404 error, while the root path gives a 200 OK status.
Calling http://localhost:8001 gives a 200 OK status with the following response body:
curl localhost:8001
response:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/acme.cert-manager.io",
"/apis/acme.cert-manager.io/v1",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apps",
"/apis/apps/v1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2",
"/apis/autoscaling/v2beta1",
"/apis/autoscaling/v2beta2",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/catalog.cattle.io",
"/apis/catalog.cattle.io/v1",
"/apis/cert-manager.io",
"/apis/cert-manager.io/v1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1",
"/apis/cluster.cattle.io",
"/apis/cluster.cattle.io/v3",
"/apis/configuration.konghq.com",
"/apis/configuration.konghq.com/v1",
"/apis/configuration.konghq.com/v1alpha1",
"/apis/configuration.konghq.com/v1beta1",
"/apis/coordination.k8s.io",
"/apis/coordination.k8s.io/v1",
"/apis/crd.projectcalico.org",
"/apis/crd.projectcalico.org/v1",
"/apis/discovery.k8s.io",
"/apis/discovery.k8s.io/v1",
"/apis/discovery.k8s.io/v1beta1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1",
"/apis/events.k8s.io/v1beta1",
"/apis/flowcontrol.apiserver.k8s.io",
"/apis/flowcontrol.apiserver.k8s.io/v1beta1",
"/apis/flowcontrol.apiserver.k8s.io/v1beta2",
"/apis/management.cattle.io",
"/apis/management.cattle.io/v3",
"/apis/metallb.io",
"/apis/metrics.k8s.io",
"/apis/metrics.k8s.io/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/node.k8s.io",
"/apis/node.k8s.io/v1",
"/apis/node.k8s.io/v1beta1",
"/apis/policy",
"/apis/policy/v1",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/scheduling.k8s.io",
"/apis/scheduling.k8s.io/v1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/apis/ui.cattle.io",
"/apis/ui.cattle.io/v1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/aggregator-reload-proxy-client-cert",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-openapiv3-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/crd-informer-synced",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/priority-and-fairness-config-consumer",
"/healthz/poststarthook/priority-and-fairness-config-producer",
"/healthz/poststarthook/priority-and-fairness-filter",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-cluster-authentication-info-controller",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-admission-initializer",
"/livez",
"/livez/autoregister-completion",
"/livez/etcd",
"/livez/log",
"/livez/ping",
"/livez/poststarthook/aggregator-reload-proxy-client-cert",
"/livez/poststarthook/apiservice-openapi-controller",
"/livez/poststarthook/apiservice-openapiv3-controller",
"/livez/poststarthook/apiservice-registration-controller",
"/livez/poststarthook/apiservice-status-available-controller",
"/livez/poststarthook/bootstrap-controller",
"/livez/poststarthook/crd-informer-synced",
"/livez/poststarthook/generic-apiserver-start-informers",
"/livez/poststarthook/kube-apiserver-autoregistration",
"/livez/poststarthook/priority-and-fairness-config-consumer",
"/livez/poststarthook/priority-and-fairness-config-producer",
"/livez/poststarthook/priority-and-fairness-filter",
"/livez/poststarthook/rbac/bootstrap-roles",
"/livez/poststarthook/scheduling/bootstrap-system-priority-classes",
"/livez/poststarthook/start-apiextensions-controllers",
"/livez/poststarthook/start-apiextensions-informers",
"/livez/poststarthook/start-cluster-authentication-info-controller",
"/livez/poststarthook/start-kube-aggregator-informers",
"/livez/poststarthook/start-kube-apiserver-admission-initializer",
"/logs",
"/metrics",
"/openapi/v2",
"/openapi/v3",
"/openapi/v3/",
"/readyz",
"/readyz/autoregister-completion",
"/readyz/etcd",
"/readyz/informer-sync",
"/readyz/log",
"/readyz/ping",
"/readyz/poststarthook/aggregator-reload-proxy-client-cert",
"/readyz/poststarthook/apiservice-openapi-controller",
"/readyz/poststarthook/apiservice-openapiv3-controller",
"/readyz/poststarthook/apiservice-registration-controller",
"/readyz/poststarthook/apiservice-status-available-controller",
"/readyz/poststarthook/bootstrap-controller",
"/readyz/poststarthook/crd-informer-synced",
"/readyz/poststarthook/generic-apiserver-start-informers",
"/readyz/poststarthook/kube-apiserver-autoregistration",
"/readyz/poststarthook/priority-and-fairness-config-consumer",
"/readyz/poststarthook/priority-and-fairness-config-producer",
"/readyz/poststarthook/priority-and-fairness-filter",
"/readyz/poststarthook/rbac/bootstrap-roles",
"/readyz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/readyz/poststarthook/start-apiextensions-controllers",
"/readyz/poststarthook/start-apiextensions-informers",
"/readyz/poststarthook/start-cluster-authentication-info-controller",
"/readyz/poststarthook/start-kube-aggregator-informers",
"/readyz/poststarthook/start-kube-apiserver-admission-initializer",
"/readyz/shutdown",
"/version"
]
}
Calling the admin API root path with the --head flag:
curl --head localhost:8001
response:
HTTP/1.1 200 OK
Audit-Id: 68823bl0-9675-413a-8ef6-7th7df4d33z3
Cache-Control: no-cache, private
Content-Type: application/json
Date: Mon, 20 Feb 2023 06:21:20 GMT
X-Kubernetes-Pf-Flowschema-Uid: bdb1c5f5-5e70-49b0-ba33-cb7420e90d89
X-Kubernetes-Pf-Prioritylevel-Uid: 29u9a7m6-b50j-404d-brg7-7f98b77c1ghb
Calling the admin API with the /services path to list all the services:
curl localhost:8001/services
response: 404 page not found
That response is not coming from the Kong API Gateway at all; it is the Kubernetes API server's discovery output (note the /apis/... and /healthz paths and the X-Kubernetes-* response headers). Make sure port 8001 on localhost actually reaches Kong's admin API before calling /services.
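One way to reach the real admin API is to port-forward Kong's admin service directly; the namespace and service name below are assumptions that depend on your Helm release:

# Hypothetical names; adjust the namespace and service to your Helm install.
kubectl -n kong port-forward svc/my-release-kong-admin 8001:8001
# Then, from another terminal, Kong's admin endpoints should respond:
curl localhost:8001/services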

Micronaut runs with the DB on dev/test but not with the built image

I'm trying to use Postgres on a Micronaut project. When running locally, or when executing the tests, everything works correctly. But when I try to generate a Docker image and run it, I get the following error:
Caused by: org.hibernate.HibernateException: HR000048: Could not determine Dialect from JDBC driver metadata (specify a connection URI with scheme 'postgresql:', 'mysql:', 'cockroachdb', or 'db2:')
According to the official documentation:
When you move to production, you will need to configure the properties
injected by Test Resources to point at your real production database.
This can be done via environment variables like so:
I should set some application properties, described on the JDBC section of the guide.
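For example, something along these lines (host, port, and credentials are placeholders for my real production database):

export DATASOURCES_DEFAULT_URL=jdbc:postgresql://production-server:5432/micronaut
export DATASOURCES_DEFAULT_USERNAME=dbuser
export DATASOURCES_DEFAULT_PASSWORD=secret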
So here is my application.yaml:
micronaut:
  application:
    name: micronaut-guide
  netty:
    default:
      allocator:
        max-order: 3
#tag::jpa[]
jpa:
  default:
    entity-scan:
      packages:
        - 'example.micronaut.domain'
    properties:
      hibernate:
        show-sql: true
        hbm2ddl:
          auto: update
        connection:
          db-type: postgres
    reactive: true
#end::jpa[]
datasources:
  default:
    driverClassName: org.postgresql.Driver
    db-type: postgresql
    schema-generate: CREATE_DROP
    dialect: POSTGRES
jpa.default.properties.hibernate.hbm2ddl.auto: update
Plus, I've tried to pass some environment variables when running it:
docker run -p 8080:8080 --rm -e DATASOURCES_DEFAULT_URL=$DATASOURCES_DEFAULT_URL -e DATASOURCES_DEFAULT_USERNAME=$DATASOURCES_DEFAULT_USERNAME -e DATASOURCES_DEFAULT_PASSWORD=$DATASOURCES_DEFAULT_PASSWORD micronaut-postgres
$ echo $DATASOURCES_DEFAULT_URL
jdbc:postgresql://localhost:3306/db
To be sure all the pieces are here, this is my build.gradle:
plugins {
    id("org.jetbrains.kotlin.jvm") version "1.6.21"
    id("org.jetbrains.kotlin.kapt") version "1.6.21"
    id("org.jetbrains.kotlin.plugin.allopen") version "1.6.21"
    id("org.jetbrains.kotlin.plugin.jpa") version "1.6.21"
    id("com.github.johnrengelman.shadow") version "7.1.2"
    id("io.micronaut.application") version "3.6.7"
    id("io.micronaut.test-resources") version "3.6.7"
}

version = "0.1"
group = "example.micronaut"

repositories {
    mavenCentral()
}

dependencies {
    kapt("io.micronaut.data:micronaut-data-processor")
    kapt("io.micronaut:micronaut-http-validation")
    kapt("io.micronaut.serde:micronaut-serde-processor")
    implementation("io.micronaut:micronaut-http-client")
    implementation("io.micronaut.data:micronaut-data-hibernate-reactive")
    implementation("io.micronaut.kotlin:micronaut-kotlin-runtime")
    implementation("io.micronaut.reactor:micronaut-reactor")
    implementation("io.micronaut.reactor:micronaut-reactor-http-client")
    implementation("com.ongres.scram:client:2.1")
    implementation("io.micronaut.serde:micronaut-serde-jackson")
    // implementation("io.vertx:vertx-mysql-client")
    implementation("io.vertx:vertx-pg-client")
    implementation("io.micronaut.sql:micronaut-vertx-pg-client")
    implementation("jakarta.annotation:jakarta.annotation-api")
    implementation("org.jetbrains.kotlin:kotlin-reflect:${kotlinVersion}")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:${kotlinVersion}")
    runtimeOnly("ch.qos.logback:logback-classic")
    // testResourcesService("mysql:mysql-connector-java")
    testResourcesService("org.postgresql:postgresql")
    compileOnly("org.graalvm.nativeimage:svm")
    implementation("io.micronaut.sql:micronaut-vertx-pg-client")
    implementation("io.micronaut:micronaut-validation")
    runtimeOnly("com.fasterxml.jackson.module:jackson-module-kotlin")
}

application {
    mainClass.set("example.micronaut.ApplicationKt")
}

java {
    sourceCompatibility = JavaVersion.toVersion("11")
}

tasks {
    compileKotlin {
        kotlinOptions {
            jvmTarget = "11"
        }
    }
    compileTestKotlin {
        kotlinOptions {
            jvmTarget = "11"
        }
    }
}

graalvmNative.toolchainDetection = false

micronaut {
    runtime("netty")
    testRuntime("junit5")
    processing {
        incremental(true)
        annotations("example.micronaut.*")
    }
    testResources {
        additionalModules.add("hibernate-reactive-postgresql")
    }
}

tasks.named("dockerfile") {
    baseImage = "eclipse-temurin:17.0.5_8-jre-jammy"
}
Another more standard way to replicate the problem
I ran into the same problem with the official example on the Micronaut website, available here.
I can generate the Docker image and run it, and the same problem arises:
Step 8/8 : ENTRYPOINT ["java", "-jar", "/home/app/application.jar"]
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in b4ac7195ad12
Removing intermediate container b4ac7195ad12
---> 98936716be8c
Successfully built 98936716be8c
Successfully tagged micronautguide:latest
Created image with ID '98936716be8c'.
...
# just checking the env var is there
$ echo $JDBC_URL
jdbc:mysql://production-server:3306/micronaut
$ docker run --rm -e DATASOURCES_DEFAULT_URL=$JDBC_URL -e DATASOURCES_DEFAULT_USERNAME=$JDBC_USER -e DATASOURCES_DEFAULT_PASSWORD=$JDBC_PASSWORD -it -p 8080:8080 98936716be8c
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
...
Caused by: org.hibernate.HibernateException: HR000048: Could not determine Dialect from JDBC driver metadata (specify a connection URI with scheme 'postgresql:', 'mysql:', 'cockroachdb', or 'db2:')

How to enable JavaScript in Druid

I have been using Druid for the past week and wanted to enable JavaScript for some postAggregations.
I think I followed the outlined steps and updated the common.runtime.properties file in ../conf/druid/_common/ to include druid.javascript.enabled=true. I then stopped the current processes and re-ran the Quickstart procedures, but it still says that JavaScript is disabled:
{
  "error" : "Unknown exception",
  "errorMessage" : "Instantiation of [simple type, class io.druid.query.aggregation.post.JavaScriptPostAggregator] value failed: JavaScript is disabled. (through reference chain: java.util.ArrayList[0])",
  "errorClass" : "com.fasterxml.jackson.databind.JsonMappingException",
  "host" : null
}
I am currently running it in the 'Quickstart' configuration - single local machine. Any pointers? Thanks!
Here is a JavaScript query for a Druid aggregation; this sample computes an average value. Save the query as query.body and issue the curl request:
curl -X POST "http://localhost:8082/druid/v2/?pretty" \ -H
'content-type: application/json' -d #query.body
{
  "queryType": "groupBy",
  "dataSource": "whirldata",
  "granularity": "all",
  "dimensions": [],
  "aggregations": [
    {"name": "rows", "type": "count", "fieldName": "rows"},
    {"name": "TargetDOS", "type": "doubleSum", "fieldName": "Target DOS"}
  ],
  "postAggregations": [
    {
      "type": "javascript",
      "name": "Target DOS Average",
      "fieldNames": ["TargetDOS", "rows"],
      "function": "function(TargetDOS, rows) { return Math.abs(TargetDOS) / rows; }"
    }
  ],
  "intervals": ["2006-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"]
}
The part you are missing is likely that the quickstart reads configs from conf-quickstart rather than conf. So try editing conf-quickstart/druid/_common/common.runtime.properties.
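A minimal sketch of that change, assuming the default quickstart layout:

# Append the flag to the quickstart config, not to conf/druid/_common/:
echo 'druid.javascript.enabled=true' >> conf-quickstart/druid/_common/common.runtime.properties
# Restart the quickstart processes afterwards so the property is picked up.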

Using the Barracuda REST API with Ansible - token authentication doesn't work

I'm trying to use the REST API of the Barracuda ADC and/or WAF. It works when I use cURL (from the documentation):
Request:
$ curl \
  -X POST \
  -H "Content-Type:application/json" \
  -d '{"username": "admin", "password": "admin"}' \
  http://10.11.19.104:8000/restapi/v2/login
Response:
{"token":"eyJldCI6IjEzODAyMzE3NTciLCJwYXNzd29yZCI6ImY3NzY2ZTFmNTgwMzgyNmE1YTAzZWZlMzcy\nYzgzOTMyIiwidXNlciI6ImFkbWluIn0=\n"}
Then we should use that token to execute commands on the API, something like:
$ curl \
  -X GET \
  -H "Content-Type:application/json" \
  -u 'eyJldCI6IjEzODAyMzE3NTciLCJwYXNzd29yZCI6ImY3NzY2ZTFmNTgwMzgyNmE1YTAzZWZlMzcy\nYzgzOTMyIiwidXNlciI6ImFkbWluIn0=\n': \
  http://10.11.19.104:8000/restapi/v2/virtual_service_groups
And it gives me a response listing (in this case) my virtual service groups; with cURL it all works.
Now, when I try to use Ansible to do the same thing, the first step to authenticate succeeds (I can even use the generated token with cURL and it accepts it), but the second step to run the commands with the generated token always gives me a 401 error (Invalid credentials):
- name: login into the load balancer
  uri:
    url: "{{ barracuda_url }}/login"
    method: POST
    body_format: json
    body:
      username: "{{ barracuda_user }}"
      password: "{{ barracuda_pass }}"
    headers:
      Content-Type: application/json
    return_content: yes
    force_basic_auth: yes
  register: login
  tags: login, debug

- debug: msg="{{ login.json.token }}"
  tags: debug

- name: get
  uri:
    url: "{{ barracuda_url }}/virtual_service_groups"
    method: GET
    body_format: json
    user: "{{ login.json.token }}:"
    headers:
      Content-Type: application/json
    return_content: yes
    force_basic_auth: yes
  register: response
Output of my playbook:
TASK [loadbalancer : login into the load balancer] *****************************
ok: [localhost]
TASK [loadbalancer : debug] ****************************************************
ok: [localhost] => {
    "msg": "eyJldCI9IjE0ODQ2MDcxNTAiXCJwYXNzd29yZCI6IjRmM2TlYWMwN2ExNmUxYWFhNGEwNTU5NTMw\nZGQ3ZmM3IiwiaXNlciI6IndpYSJ9\n"
}
TASK [loadbalancer : get] ******************************************************
fatal: [localhost]: FAILED! => {"changed": false, "connection": "close", "content": "{\"error\":{\"msg\":\"Please log in to get valid token\",\"status\":401,\"type\":\"Invalid Credentials\"}}", "content_type": "application/json; charset=utf8", "date": "Mon, 16 Jan 2017 22:32:30 GMT", "failed": true, "json": {"error": {"msg": "Please log in to get valid token", "status": 401, "type": "Invalid Credentials"}}, "msg": "Status code was not [200]: HTTP Error 401: ", "redirected": false, "server": "BarracudaHTTP 4.0", "status": 401, "transfer_encoding": "chunked", "url": "http://10.11.19.104:8000/restapi/v2/virtual_service_groups"}
Remove the colon from "{{ login.json.token }}:" in the user argument. Use:
user: "{{ login.json.token }}"
The colon is not part of the username; it is curl syntax (used in your example to avoid an interactive password prompt). From the curl manual:
-u, --user <user:password>
[...]
If you simply specify the user name, curl will prompt for a password.
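To see why the trailing colon works in curl but not in Ansible, here is a sketch of what each side builds (TOKEN stands for the value returned by /login):

# curl's -u takes "user:password" and base64-encodes the whole string;
# the trailing colon just marks an empty password:
curl -u "TOKEN:" http://10.11.19.104:8000/restapi/v2/virtual_service_groups
# which sends roughly: Authorization: Basic $(printf 'TOKEN:' | base64)
# Ansible's uri module inserts the colon between user and password itself,
# so only the bare token belongs in the user parameter.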

ServiceAddress is empty when using Consul Registrator

[
  {
    "Node": "consul-staging-a-1.org",
    "Address": "10.0.11.221",
    "ServiceID": "mesos-slave-staging-a-1.org:determined_bartik:5000",
    "ServiceName": "service1",
    "ServiceTags": null,
    "ServiceAddress": "",
    "ServicePort": 4003
  },
  {
    "Node": "consul-staging-a-1.org",
    "Address": "10.0.11.221",
    "ServiceID": "mesos-slave-staging-a-1.org:angry_hypatia:5000",
    "ServiceName": "service1",
    "ServiceTags": null,
    "ServiceAddress": "",
    "ServicePort": 4007
  }
]
This is what I get from querying the Consul service API (/v1/catalog/service/service1).
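For reference, the query was along these lines (host and port taken from the registrator command below):

curl http://consul-staging-a-1.org:8500/v1/catalog/service/service1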
Commands that I used to start registrator and services:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h $HOSTNAME gliderlabs/registrator consul://consul-staging-a-1.org:8500
docker run -d -p 4003:5000 -e "SERVICE_NAME=service1" docker-training/hello-world
docker run -d -p 4007:5000 -e "SERVICE_NAME=service1" docker-training/hello-world
Is there a step I'm doing wrong? How do you assign the hostname to the ServiceAddress field?
I had a similar issue and found that before v6, Consul Registrator leaves ServiceAddress empty, while in v6 (the latest at the moment) it is "0.0.0.0". I tried the "-ip" option, but it did not help; somehow it assigns the internal IP address of the container. I found a related issue at:
https://github.com/gliderlabs/registrator/issues/240
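For completeness, this is roughly the "-ip" invocation I tried (the host address 10.0.11.221 is taken from the catalog output above as an example), before settling on the port-binding workaround below:

docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h $HOSTNAME \
  gliderlabs/registrator -ip 10.0.11.221 consul://consul-staging-a-1.org:8500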
In my case I fixed it by binding the container to an IP address, like:
docker run -d -p 10.0.0.3:4003:5000 -e "SERVICE_NAME=service1" docker-training/hello-world