Localstack: which port to use for ES rest api - localstack

I am running LocalStack in Docker, using image 0.11.1.
I enabled the es service and exposed port 4566, as per the doc (https://github.com/localstack/localstack):
Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default
I could successfully use the AWS CLI to list domain names and create new ones:
aws --endpoint-url=http://localhost:4566 es list-domain-names
aws --endpoint-url=http://localhost:4566 es create-elasticsearch-domain --domain-name my-domain --elasticsearch-version 7.4
But when I tried to index a document:
curl -XPUT http://localhost:4566/my-domain/_doc/1 -d '{"hello": "World"}' -H 'Content-Type: application/json'
it returned a {"status": "running"} response, and I saw this message in the logs:
INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "/my-domain/_doc/1", target header "", auth header ""
Then I also exposed port 4571 in docker-compose.yml and tried the same indexing request, this time against http://localhost:4571/my-domain/_doc/1:
curl -XPUT http://localhost:4571/my-domain/_doc/1 -d '{"hello": "World"}' -H 'Content-Type: application/json'
It worked.
I don't understand: according to the doc I should only need port 4566, but it does not work.
Am I missing something?
My docker-compose.yml with both ports exposed:
...
  localstack:
    container_name: "localstack"
    image: localstack/localstack:0.11.1
    privileged: true
    ports:
      - "4566:4566"
      - "4571:4571"
    environment:
      - SERVICES=es
      - START_WEB=0
      - LAMBDA_REMOTE_DOCKER=0
      - DATA_DIR=/tmp/localstack/data
...
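For reference, LocalStack images of this vintage also expose a health endpoint on the edge port (newer releases moved it to /_localstack/health), which is a quick way to confirm which services are running:

curl http://localhost:4566/health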

From here, you can see this table:

Parameter         Description                                         Default
service.edgePort  Port number for Localstack edge service             4566
service.esPort    Port number for Localstack elasticsearch service    4571
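So on 0.11.x the edge port (4566) only fronts the AWS management APIs, while the Elasticsearch cluster itself is served on the separate es port (4571 by default), which is why the indexing request only works there. To double-check which endpoint a domain reports, you can ask the management API (a standard AWS CLI call; the exact value returned depends on the LocalStack setup):

aws --endpoint-url=http://localhost:4566 es describe-elasticsearch-domain \
    --domain-name my-domain --query 'DomainStatus.Endpoint'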

CloudRun: Debug authentication error from curl

I am spinning up a container (a pod/Job) on GKE.
I have set up the appropriate Service Account on the cluster's VMs.
Therefore, when I manually curl a specific Cloud Run service endpoint, the request succeeds (I am authorized and get a 200 response).
However, when I try to automate this by running the image in a Job as follows, I get a 401:
- name: pre-upgrade-job
  image: "google/cloud-sdk"
  args:
    - curl
    - -s
    - -X
    - GET
    - -H
    - "Authorization: Bearer $(gcloud auth print-identity-token)"
    - https://my-cloud-run-endpoint
Here are the logs on Stackdriver
{
  httpRequest: {
    latency: "0s"
    protocol: "HTTP/1.1"
    remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b"
    requestMethod: "GET"
    requestSize: "313"
    requestUrl: "https://my-cloud-run-endpoint"
    serverIp: "212.45.313.83"
    status: 401
    userAgent: "curl/7.59.0"
  }
  insertId: "29jdnc39dhfbfb"
  logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests"
  receiveTimestamp: "2019-09-26T16:27:30.681513204Z"
  resource: {
    labels: {
      configuration_name: "my-cloud-run-service"
      location: "us-east1"
      project_id: "my-gcp-project"
      revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb"
      service_name: "my-cloud-run-service"
    }
    type: "cloud_run_revision"
  }
  severity: "WARNING"
  textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating"
  timestamp: "2019-09-26T16:27:30.673565Z"
}
My question is: how can I see whether the Authorization header actually reaches the endpoint (the logs do not enlighten me much), and if it does, whether it is rendered correctly when the image's command/args are invoked?
In your Job you use the google/cloud-sdk container, which is a from-scratch installation of the gcloud tooling. It's generic, without any customization.
When you call $(gcloud auth print-identity-token), you ask for the identity token of the service account configured in the gcloud tool.
Putting these two points together: you are asking a generic/blank installation of the gcloud tool to generate an identity token. No service account is defined in that gcloud, so your token is empty (as @johnhanley said).
To solve this, add an environment variable like this:
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: <path to your credential.json>
I don't know where the credential.json of your running environment currently is. Try echoing this env var to find it, and mount/forward it correctly into your gcloud Job.
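Note that gcloud itself does not pick this variable up automatically; with the key file mounted, the Job can activate the account explicitly before minting a token (a sketch; the mount path is an assumption):

# activate the mounted service account, then mint an identity token
gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
gcloud auth print-identity-token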
If you are on Compute Engine or a similar system compliant with the metadata server, you can get a correct token with this command:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<URL of your service>"
UPDATE
In your original Job the args are not processed by a shell, so the literal string $(gcloud auth print-identity-token) is sent in the header, which would explain the 401. Try running the command through a shell so the substitution is actually evaluated. Here is the updated Job:
- name: pre-upgrade-job
  image: "google/cloud-sdk"
  command: ["bash", "-c"]
  args:
    - "curl -s -X GET -H \"Authorization: Bearer $(gcloud auth print-identity-token)\" https://my-cloud-run-endpoint"
Note that in Kubernetes the Docker ENTRYPOINT is overridden with command, not entrypoint.
I'm not sure that works. Let me know.
In your Job, gcloud auth print-identity-token likely does not return any token.
The reason is that locally gcloud uses your logged-in identity to mint a token, but inside the Job you are not logged into gcloud.

Keycloak-gatekeeper: 'aud' claim and 'client_id' do not match

What is the correct way to set the aud claim to avoid the error below?
unable to verify the id token {"error": "oidc: JWT claims invalid: invalid claims, 'aud' claim and 'client_id' do not match, aud=account, client_id=webapp"}
I sort of worked around this error by hardcoding the aud claim to be the same as my client_id. Is there a better way?
Here is my docker-compose.yml:
version: '3'
services:
  keycloak-proxy:
    image: "keycloak/keycloak-gatekeeper"
    environment:
      - PROXY_LISTEN=0.0.0.0:3000
      - PROXY_DISCOVERY_URL=http://keycloak.example.com:8181/auth/realms/realmcom
      - PROXY_CLIENT_ID=webapp
      - PROXY_CLIENT_SECRET=0b57186c-e939-48ff-aa17-cfd3e361f65e
      - PROXY_UPSTREAM_URL=http://test-server:8000
    ports:
      - "8282:3000"
    command:
      - "--verbose"
      - "--enable-refresh-tokens=true"
      - "--enable-default-deny=true"
      - "--resources=uri=/*"
      - "--enable-session-cookies=true"
      - "--encryption-key=AgXa7xRcoClDEU0ZDSH4X0XhL5Qy2Z2j"
  test-server:
    image: "test-server"
As of Keycloak 4.6.0, the client id is apparently no longer automatically added to the audience ('aud') field of the access token.
Therefore, even though the login succeeds, the client rejects the token.
To fix this you need to configure the audience for your clients (see doc [2]).
Configure the audience in Keycloak:
1. Add a realm, or configure an existing one
2. Add a client my-app, or use an existing one
3. Go to the newly added "Client Scopes" menu [1]
4. Add a client scope 'good-service'
5. Within the settings of 'good-service', go to the Mappers tab and create a Protocol Mapper 'my-app-audience':
   - Name: my-app-audience
   - Mapper Type: Audience
   - Included Client Audience: my-app
   - Add to access token: on
6. Configure the client my-app in the "Clients" menu:
   - Open the Client Scopes tab in the my-app settings
   - Add the available client scope "good-service" to the assigned default client scopes
If you have more than one client, repeat these steps for the other clients as well, adding the good-service scope to each.
The intention behind this is to isolate client access: the issued access token will only be valid for the intended audience.
This is thoroughly described in Keycloak's documentation [1, 2].
Links to recent master version of keycloak documentation:
[1] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/client-scopes.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/oidc/audience.adoc
Links with git tag:
[1] https://github.com/keycloak/keycloak-documentation/blob/5e340356e76a8ef917ef3bfc2e548915f527d093/server_admin/topics/clients/client-scopes.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/f490e1fba7445542c2db0b4202647330ddcdae53/server_admin/topics/clients/oidc/audience.adoc
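Once the mapper is in place, you can verify the aud claim with Keycloak's standard token introspection endpoint. A sketch reusing the client id and secret from the question (the test user and the temporary use of the password grant are assumptions, and jq is required):

# obtain an access token via the resource-owner password grant
TOKEN=$(curl -s \
  -d "grant_type=password" -d "client_id=webapp" \
  -d "client_secret=0b57186c-e939-48ff-aa17-cfd3e361f65e" \
  -d "username=testuser" -d "password=testpass" \
  "http://keycloak.example.com:8181/auth/realms/realmcom/protocol/openid-connect/token" | jq -r .access_token)

# introspect the token and check its audience
curl -s -u "webapp:0b57186c-e939-48ff-aa17-cfd3e361f65e" \
  -d "token=${TOKEN}" \
  "http://keycloak.example.com:8181/auth/realms/realmcom/protocol/openid-connect/token/introspect" | jq .aud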
This is due to a bug: https://issues.jboss.org/browse/KEYCLOAK-8954
There are two workarounds described in the bug report, both of which do essentially the same thing as the accepted answer here, but they can be applied at the client-scope level, so you don't have to apply them to every client individually.
If, like me, you want to automate the Keycloak config, you can use kcadm:
/opt/jboss/keycloak/bin/kcadm.sh \
create clients/d3170ee6-7778-413b-8f41-31479bdb2166/protocol-mappers/models -r your-realm \
-s name=audience-mapping \
-s protocol=openid-connect \
-s protocolMapper=oidc-audience-mapper \
-s config.\"included.client.audience\"="your-audience" \
-s config.\"access.token.claim\"="true" \
-s config.\"id.token.claim\"="false"
This works for me. In my SecurityConfiguration class:

@Bean
public CorsConfigurationSource corsConfigurationSource() {
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    config.setAllowedOrigins(Arrays.asList("http://localhost:3000"));
    config.setAllowedMethods(Arrays.asList(CorsConfiguration.ALL));
    config.setAllowedHeaders(Arrays.asList(CorsConfiguration.ALL));
    source.registerCorsConfiguration("/**", config);
    return source;
}

Unable to create marklogic rest api instance

I am trying to create a REST API instance with the following configuration:
rest-api.json
{
  "rest-api": {
    "name": "restdb-api",
    "database": "restdb",
    "port": "8003",
    "xdbc-enabled": true,
    "forests-per-host": 1,
    "error-format": "json"
  }
}
curl --anyauth --user admin:admin -i -X POST -d @"./REST/rest-api.json" -H "Content-type: application/json" http://localhost:8002/LATEST/rest-apis
The endpoint returns 201 Created, but I am unable to access the created instance at http://localhost:8003. I have tried other ports, but the same thing happens: the port is not listening. Please help me solve this problem.
HTTP/1.1 401 Unauthorized
Server: MarkLogic
WWW-Authenticate: Digest realm="public", qop="auth", nonce="36473d01f5e45a:ND9/6NHD0sw9o2y/xad/uQ==", opaque="e9594a1b7e019a97"
Content-Type: text/html; charset=utf-8
Content-Length: 209
Connection: Keep-Alive
Keep-Alive: timeout=5

HTTP/1.1 201 Created
Server: MarkLogic
Content-Length: 0
Connection: Keep-Alive
Keep-Alive: timeout=5
Since you said you are running MarkLogic in a local Docker container, you might need to publish the port.
See Docker Expose.
Please note from the link that - "The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports."
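Concretely, the new app server's port has to be published when the container is created; -p cannot be added to a running container, so it must be recreated. A sketch (the image name is a placeholder, and 8000-8002 are MarkLogic's default ports):

docker run -d --name marklogic \
  -p 8000-8002:8000-8002 \
  -p 8003:8003 \
  <your-marklogic-image>

Once the port is published, the instance should answer, e.g.:

curl --anyauth --user admin:admin "http://localhost:8003/LATEST/search"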
Hope that helps!

I can't connect from the outside to the mongo-express

I am using mongo-express, installed on an AWS EC2 instance. I started it like this:
$ node app
Mongo Express server listening on port 8081 at localhost
Database connected
Connecting to db...
Database db connected
However, I cannot connect to port 8081 from the browser.
I can download mongo-express's index.html with wget on the EC2 instance itself:
$ wget http://admin:pass@localhost:8081
--2016-02-22 02:22:25-- http://admin:*password*@localhost:8081/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8081... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Authentication selected: Basic realm="Authorization Required"
Reusing existing connection to localhost:8081.
HTTP request sent, awaiting response... 200 OK
Length: 9319 (9.1K) [text/html]
Saving to: 'index.html'
index.html 100%[===================>] 9.10K --.-KB/s in 0.04s
2016-02-22 02:22:26 (236 KB/s) - 'index.html' saved [9319/9319]
(interleaved in the output is mongo-express's own log line: GET / 200 218.468 ms - 9319)
By the way, port 8081 in the security group of ec2 is open to my IP.
The cause turned out to be the following setting in config.js:
site: {
  // baseUrl: the URL that mongo express will be located at - Remember to add the forward slash at the start and end!
  baseUrl: '/',
  cookieKeyName: 'mongo-express',
  cookieSecret: process.env.ME_CONFIG_SITE_COOKIESECRET || 'cookiesecret',
  host: process.env.VCAP_APP_HOST || 'localhost',
  port: process.env.VCAP_APP_PORT || 8081,
  requestSizeLimit: process.env.ME_CONFIG_REQUEST_SIZE || '50mb',
  sessionSecret: process.env.ME_CONFIG_SITE_SESSIONSECRET || 'sessionsecret',
  sslCert: process.env.ME_CONFIG_SITE_SSL_CRT_PATH || '',
  sslEnabled: process.env.ME_CONFIG_SITE_SSL_ENABLED || false,
  sslKey: process.env.ME_CONFIG_SITE_SSL_KEY_PATH || '',
},
After changing the host value to "0.0.0.0", I can now connect to mongo-express from the browser.
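Since the config reads the host from the environment, the same fix works without editing config.js at all; a minimal sketch:

# bind mongo-express to all interfaces for this run only
VCAP_APP_HOST=0.0.0.0 node app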
In my case, I hit this issue because I wanted to expose the container on another port (4301), but express was still listening on 8081.
To fix it, I indeed had to specify VCAP_APP_HOST and VCAP_APP_PORT.
You can pass them directly on the run command, like:
docker run --network YOUR_NETWORK --name YOUR_MONGO_EXPRESS_CONTAINER_NAME \
  -e ME_CONFIG_MONGODB_SERVER=YOUR_MONGO_SERVER_IP \
  -e VCAP_APP_HOST=0.0.0.0 -e VCAP_APP_PORT=4301 \
  -p 4301:4301 mongo-express

Unable to configure nginx as mail proxy

I need to use nginx as a mail proxy. I am completely new to nginx and need some help with the configuration.
Here is what I did:
First I built a service that mocks the authentication services described here: http://wiki.nginx.org/NginxMailCoreModule. For example,
curl -v -H "Host:auth.server.hostname" -H "Auth-Method:plain" -H "Auth-User:user" -H "Auth-pass:123" -H "Auth-Protocol:imap" -H "Auth-Login-Attempt:1" -H "Client-IP: 192.168.1.1" http://localhost:8080/authorize
returns the following response headers:
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=ISO-8859-1
< Auth-Status: OK
< Auth-Server: 192.168.1.10
< Auth-Port: 110
Second, I installed nginx on my Mac (after installing MacPorts):
$ sudo port -d selfupdate
$ sudo port install nginx
Third, I created an nginx.conf with the following:
worker_processes 1;
error_log /var/log/nginx/error.log info;

mail {
    server_name <my mail server here>;
    auth_http http://localhost:8080/authorize;

    pop3_auth plain apop cram-md5;
    pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL";
    xclient off;

    server {
        listen 110;
        protocol pop3;
        proxy on;
        proxy_pass_error_message on;
    }
}
Here is what I got running nginx:
$ nginx -V
nginx version: nginx/1.2.4
configure arguments: --prefix=/opt/local --with-cc-opt='-I/opt/local/include -O2' --with-ld-opt=-L/opt/local/lib --conf-path=/opt/local/etc/nginx/nginx.conf --error-log-path=/opt/local/var/log/nginx/error.log --http-log-path=/opt/local/var/log/nginx/access.log --pid-path=/opt/local/var/run/nginx/nginx.pid --lock-path=/opt/local/var/run/nginx/nginx.lock --http-client-body-temp-path=/opt/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/opt/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/opt/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/opt/local/var/run/nginx/uwsgi_temp --with-ipv6
$ nginx
nginx: [emerg] unknown directive "mail" in /opt/local/etc/nginx/nginx.conf:6
The only mention of that error on the web brings up a discussion in Russian...
My questions:
1. Why am I getting this unknown directive error?
2. Does my config look correct at first sight, or am I missing some key component for the mail proxy to work with the authentication approach described at http://wiki.nginx.org/NginxMailCoreModule?
I got the mail proxy working, so I will answer my own questions for future reference:
nginx doesn't include mail support by default, so it has to be built with the mail module for the mail directive to be recognized:
$ sudo port edit nginx
==> add --with-mail at the end of the configure parameters
Then (re)install nginx.
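A quick way to confirm the rebuilt binary actually includes mail support (nginx -V prints the configure arguments to stderr):

nginx -V 2>&1 | grep -- --with-mail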
In the config I included, I was also missing the events block:
events {
    worker_connections 1024;
}
An important clarification that kept me stuck for a while: the authentication service (specified with auth_http) needs to return the mail server as an IP address (in Auth-Server), not as a host name.
Finally, for nginx to proxy both inbound and outbound traffic, an smtp listener needs to be added, following the same approach as the pop3 configuration. In my case I used port 2525, so I had:
server {
    listen 2525;
    protocol smtp;
}
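As a smoke test of the running proxy: curl speaks POP3 natively, so you can exercise the listener with the mock credentials from the auth service above (user/123, carried over from the earlier curl example):

# list messages through the proxy on port 110
curl -v pop3://localhost:110/ --user user:123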