We use Symfony with API Platform in Docker, and we have a problem with Varnish.
For local development, Varnish works with the default.vcl configuration file (https://github.com/api-platform/api-platform/blob/master/api/docker/varnish/conf/default.vcl). When deployed to the server, Varnish returns the error "Error (null) Backend fetch failed".
When Varnish is disabled and requests go directly to nginx with PHP-FPM, API Platform works properly.
I increased the http_resp_hdr_len parameter to 131072 bytes (128 KB) and http_resp_size to 10485760 bytes (10 MB), but it doesn't help and the error remains.
Docker command to start Varnish:
CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-p", "http_resp_hdr_len=131072", "-p", "http_resp_size=10485760"]
The parameter .first_byte_timeout = 600s; was also added to the Varnish default.vcl.
default.vcl
vcl 4.0;

import std;

backend default {
  .host = "api";
  .port = "80";
  .first_byte_timeout = 600s;

  # Health check
  .probe = {
    .url = "/";
    .timeout = 5s;
    .interval = 50s;
    .window = 5;
    .threshold = 3;
  }
}

# Hosts allowed to send BAN requests
acl invalidators {
  "localhost";
  "php";
  # local Kubernetes network
  "10.0.0.0"/8;
  "172.16.0.0"/12;
  "192.168.0.0"/16;
}

sub vcl_recv {
  if (req.restarts > 0) {
    set req.hash_always_miss = true;
  }

  # Remove the "Forwarded" HTTP header if exists (security)
  unset req.http.forwarded;

  # To allow API Platform to ban by cache tags
  if (req.method == "BAN") {
    if (client.ip !~ invalidators) {
      return (synth(405, "Not allowed"));
    }

    if (req.http.ApiPlatform-Ban-Regex) {
      ban("obj.http.Cache-Tags ~ " + req.http.ApiPlatform-Ban-Regex);
      return (synth(200, "Ban added"));
    }

    return (synth(400, "ApiPlatform-Ban-Regex HTTP header must be set."));
  }

  # For health checks
  if (req.method == "GET" && req.url == "/healthz") {
    return (synth(200, "OK"));
  }
}

sub vcl_hit {
  if (obj.ttl >= 0s) {
    # A pure unadulterated hit, deliver it
    return (deliver);
  }

  if (std.healthy(req.backend_hint)) {
    # The backend is healthy
    # Fetch the object from the backend
    return (restart);
  }

  # No fresh object and the backend is not healthy
  if (obj.ttl + obj.grace > 0s) {
    # Deliver graced object
    # Automatically triggers a background fetch
    return (deliver);
  }

  # No valid object to deliver
  # No healthy backend to handle request
  # Return error
  return (synth(503, "API is down"));
}

sub vcl_deliver {
  # Don't send cache tags related headers to the client
  unset resp.http.url;

  # Comment the following line to send the "Cache-Tags" header to the client (e.g. to use CloudFlare cache tags)
  unset resp.http.Cache-Tags;
}

sub vcl_backend_response {
  # Ban lurker friendly header
  set beresp.http.url = bereq.url;

  # Add a grace in case the backend is down
  set beresp.grace = 1h;
}
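For reference, the BAN logic above can be exercised from a host matched by the invalidators ACL with a request along these lines (the hostname and regex are illustrative, not taken from the original setup):
curl -X BAN -H 'ApiPlatform-Ban-Regex: ^/books/.*$' http://varnish/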
Please advise what the problem with Varnish might be and how to make it work correctly.
The problem was solved as follows: I took the parameters for running Varnish from thomasmoreaumaster (https://github.com/api-platform/api-platform/issues/1367).
CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-p", "http_resp_hdr_len=128k", "-p", "http_resp_size=128k", "-p", "http_req_hdr_len=64k", "-p", "workspace_backend=256k", "-p", "workspace_client=256k", "-p", "http_max_hdr=256"]
I also removed the SSL directory bind mount from the proxying nginx in API Platform and set up an nginx-proxy with SSL enabled via a volume.
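A minimal sketch of what that can look like, assuming the certificates are simply mounted read-only into the proxy container (image, paths and ports are illustrative):
docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v "$(pwd)/certs:/etc/nginx/certs:ro" \
  -v "$(pwd)/nginx-proxy.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx:alpine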
Now Varnish works well.
Thanks for your help and support.
Related
I'm working on automating a HashiCorp Vault process, and I need to repeatedly run the vault operator init command for trial-and-error testing. I tried uninstalling Vault and installing it again, but it seems that this doesn't remove the previously generated unseal keys and root token. How can I do this?
I read somewhere that I needed to delete my storage "file" path, which I already did, but it's not working (my /opt/vault/data/ directory is actually empty). Here is my vault.hcl file:
# Full configuration options can be found at
# https://www.vaultproject.io/docs/configuration
ui = true
#mlock = true
#disable_mlock = true
storage "file" {
path = "/opt/vault/data"
}
#storage "consul" {
# address = "127.0.0.1:8500"
# path = "vault"
#}
# HTTP listener
#listener "tcp" {
# address = "127.0.0.1:8200"
# tls_disable = 1
#}
# HTTPS listener
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/opt/vault/tls/tls.crt"
tls_key_file = "/opt/vault/tls/tls.key"
}
# Enterprise license_path
# This will be required for enterprise as of v1.8
#license_path = "/etc/vault.d/vault.hclic"
# Example AWS KMS auto unseal
#seal "awskms" {
# region = "us-east-1"
# kms_key_id = "REPLACE-ME"
#}
# Example HSM auto unseal
#seal "pkcs11" {
# lib = "/usr/vault/lib/libCryptoki2_64.so"
# slot = "0"
# pin = "AAAA-BBBB-CCCC-DDDD"
# key_label = "vault-hsm-key"
# hmac_key_label = "vault-hsm-hmac-key"
#}
Best practice for this type of setup is actually Terraform, Chef, or any other stateful configuration tool. That way you can bring the environment to the desired state (terraform apply) and tear it down easily (terraform destroy).
To re-init Vault, you can bring it down, delete the data folder ("/opt/vault/data" in your case), and bring up another instance.
Delete /opt/vault/data
Reboot your computer
(You may also need to delete the file located at ~/.vault-token)
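A minimal sketch of the full sequence, assuming Vault runs as a systemd service and uses the file storage path from the vault.hcl above:
sudo systemctl stop vault
sudo rm -rf /opt/vault/data/*   # wipe the file storage backend
rm -f ~/.vault-token            # drop the locally cached root token
sudo systemctl start vault
vault operator init             # generates fresh unseal keys and a new root token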
If you only want to do testing, why don't you use Vault in dev mode?
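In dev mode Vault runs in-memory, starts unsealed and prints a root token on startup, so there is nothing to clean up between runs:
vault server -dev
# in another shell
export VAULT_ADDR='http://127.0.0.1:8200'
vault status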
I am trying to deploy Kube State Metrics into the kube-system namespace in my EKS Cluster (eks.4) running Kubernetes v1.14.
Kubernetes Connection
provider "kubernetes" {
host = var.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster_auth.token
cluster_ca_certificate = base64decode(var.cluster.certificate)
load_config_file = true
}
Deployment Manifest (as .tf)
resource "kubernetes_deployment" "kube_state_metrics" {
metadata {
name = "kube-state-metrics"
namespace = "kube-system"
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
replicas = 1
selector {
match_labels = {
k8s-app = "kube-state-metrics"
}
}
template {
metadata {
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
container {
name = "kube-state-metrics"
image = "quay.io/coreos/kube-state-metrics:v1.7.2"
port {
name = "http-metrics"
container_port = 8080
}
port {
name = "telemetry"
container_port = 8081
}
liveness_probe {
http_get {
path = "/healthz"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
readiness_probe {
http_get {
path = "/"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
}
service_account_name = "kube-state-metrics"
}
}
}
}
I have deployed all the required RBAC manifests from https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes as well - redacted here for brevity.
When I run terraform apply on the deployment above, the Terraform output is as follows:
kubernetes_deployment.kube_state_metrics: Still creating... [6m50s elapsed]
Eventually timing out at 10m.
Here is the log output from the kube-state-metrics pod:
I0910 23:41:19.412496 1 main.go:140] metric white-blacklisting: blacklisting the following items:
W0910 23:41:19.412535 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0910 23:41:19.412565 1 client_config.go:546] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
F0910 23:41:19.412782 1 main.go:148] Failed to create client: invalid configuration: no configuration has been provided
Adding the following to the spec got me to a successful deployment:
automount_service_account_token = true
For posterity:
resource "kubernetes_deployment" "kube_state_metrics" {
metadata {
name = "kube-state-metrics"
namespace = "kube-system"
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
replicas = 1
selector {
match_labels = {
k8s-app = "kube-state-metrics"
}
}
template {
metadata {
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
automount_service_account_token = true
container {
name = "kube-state-metrics"
image = "quay.io/coreos/kube-state-metrics:v1.7.2"
port {
name = "http-metrics"
container_port = 8080
}
port {
name = "telemetry"
container_port = 8081
}
liveness_probe {
http_get {
path = "/healthz"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
readiness_probe {
http_get {
path = "/"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
}
service_account_name = "kube-state-metrics"
}
}
}
}
I didn't try with Terraform.
I have just run this deployment locally and I got the same error.
Please run your deployment locally to see the state of your deployment and pods.
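For example (namespace and label as in the manifest above; these are standard kubectl commands):
kubectl -n kube-system get deployment,pods -l k8s-app=kube-state-metrics
kubectl -n kube-system logs deploy/kube-state-metrics
The logs from my local run: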
I0910 13:25:49.632847 1 main.go:140] metric white-blacklisting: blacklisting the following items:
W0910 13:25:49.632871 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
and finally:
I0910 13:25:49.634748 1 main.go:185] Testing communication with server
I0910 13:25:49.650994 1 main.go:190] Running with Kubernetes cluster version: v1.12+. git version: v1.12.8-gke.10. git tree state: clean. commit: f53039cc1e5295eed20969a4f10fb6ad99461e37. platform: linux/amd64
I0910 13:25:49.651028 1 main.go:192] Communication with server successful
I0910 13:25:49.651598 1 builder.go:126] Active collectors: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,limitranges,namespaces,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses
I0910 13:25:49.651607 1 main.go:226] Starting metrics server: 0.0.0.0:8080
I0910 13:25:49.652149 1 main.go:201] Starting kube-state-metrics self metrics server: 0.0.0.0:8081
verification:
Connected to kube-state-metrics (xx.xx.xx.xx) port 8080 (#0)
GET /metrics HTTP/1.1
Host: kube-state-metrics:8080
User-Agent: curl/7.58.0
Accept: */*
HTTP/1.1 200 OK
Content-Type: text/plain; version=0.0.4
Date: Tue, 10 Sep 2019 13:39:52 GMT
Transfer-Encoding: chunked
[49027 bytes data]
# HELP kube_certificatesigningrequest_labels Kubernetes labels converted to Prometheus labels.
If you are building your own image, please follow the issues on GitHub and the docs.
Update: Just to clarify.
As mentioned in my answer, I didn't try with Terraform, but it seems that the original question described only one problem: W0910 13:25:49.632871 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
So I suggested running this deployment locally and verifying all errors from the logs. It later turned out that there is a problem with automount_service_account_token. This important error wasn't included in the original question.
So please follow the Terraform issues on GitHub to see how you can solve this problem.
As per the description on GitHub:
I spent hours trying to figure out why a service account and deployment wasn't working in Terraform, but worked with no issues in kubectl - it was the AutomountServiceAccountToken being hardcoded to False in the deployment resource.
At a minimum this should be documented in the Terraform docs for the resource with something noting the resource does not behave like kubectl does.
I hope this explains the problem.
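In Terraform terms, the workaround boils down to the two attributes already present in the working manifest posted above, set inside the pod template spec (shown here in isolation as a reminder, not as a complete manifest):
spec {
  automount_service_account_token = true
  service_account_name            = "kube-state-metrics"
  # container { ... } as in the full manifest above
}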
I have built my job jar using sbt assembly to have all dependencies in one jar. When I try to submit my binary to spark-jobserver, I am getting akka.pattern.AskTimeoutException.
I modified my configuration to be able to submit large jars (I added parsing.max-content-length = 300m to my configuration). I also increased some of the timeouts in the configuration, but nothing helped.
After I run:
curl --data-binary @matching-ml-assembly-1.0.jar localhost:8090/jars/matching-ml
I am getting:
{
"status": "ERROR",
"result": {
"message": "Ask timed out on [Actor[akka://JobServer/user/binary-manager#1785133213]] after [3000 ms]. Sender[null] sent message of type \"spark.jobserver.StoreBinary\".",
"errorClass": "akka.pattern.AskTimeoutException",
"stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)", "akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)", "scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)", "scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:331)", "akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:282)", "akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:286)", "akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:238)", "java.lang.Thread.run(Thread.java:745)"]
}
My configuration:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
# spark.master will be passed to each job's JobContext
master = "local[4]"
# master = "mesos://vm28-hulk-pub:5050"
# master = "yarn-client"
# Default # of CPUs for jobs to use for Spark standalone cluster
job-number-cpus = 4
jobserver {
port = 8090
context-per-jvm = false
# Note: JobFileDAO is deprecated from v0.7.0 because of issues in
# production and will be removed in future, now defaults to H2 file.
jobdao = spark.jobserver.io.JobSqlDAO
filedao {
rootdir = /tmp/spark-jobserver/filedao/data
}
datadao {
# storage directory for files that are uploaded to the server
# via POST/data commands
rootdir = /tmp/spark-jobserver/upload
}
sqldao {
# Slick database driver, full classpath
slick-driver = slick.driver.H2Driver
# JDBC driver, full classpath
jdbc-driver = org.h2.Driver
# Directory where default H2 driver stores its data. Only needed for H2.
rootdir = /tmp/spark-jobserver/sqldao/data
# Full JDBC URL / init string, along with username and password. Sorry, needs to match above.
# Substitutions may be used to launch job-server, but leave it out here in the default or tests won't pass
jdbc {
url = "jdbc:h2:file:/tmp/spark-jobserver/sqldao/data/h2-db"
user = ""
password = ""
}
# DB connection pool settings
dbcp {
enabled = false
maxactive = 20
maxidle = 10
initialsize = 10
}
}
# When using chunked transfer encoding with scala Stream job results, this is the size of each chunk
result-chunk-size = 1m
}
# Predefined Spark contexts
# contexts {
# my-low-latency-context {
# num-cpu-cores = 1 # Number of cores to allocate. Required.
# memory-per-node = 512m # Executor memory per node, -Xmx style eg 512m, 1G, etc.
# }
# # define additional contexts here
# }
# Universal context configuration. These settings can be overridden, see README.md
context-settings {
num-cpu-cores = 2 # Number of cores to allocate. Required.
memory-per-node = 2G # Executor memory per node, -Xmx style eg 512m, #1G, etc.
# In case spark distribution should be accessed from HDFS (as opposed to being installed on every Mesos slave)
# spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"
# URIs of Jars to be loaded into the classpath for this context.
# Uris is a string list, or a string separated by commas ','
# dependent-jar-uris = ["file:///some/path/present/in/each/mesos/slave/somepackage.jar"]
# Add settings you wish to pass directly to the sparkConf as-is such as Hadoop connection
# settings that don't use the "spark." prefix
passthrough {
#es.nodes = "192.1.1.1"
}
}
# This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
# home = "/home/spark/spark"
}
# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.
spray.can.server {
# uncomment the next lines for making this an HTTPS example
# ssl-encryption = on
# path to keystore
#keystore = "/some/path/sjs.jks"
#keystorePW = "changeit"
# see http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#SSLContext for more examples
# typical are either SSL or TLS
encryptionType = "SSL"
keystoreType = "JKS"
# key manager factory provider
provider = "SunX509"
# ssl engine provider protocols
enabledProtocols = ["SSLv3", "TLSv1"]
idle-timeout = 60 s
request-timeout = 20 s
connecting-timeout = 5s
pipelining-limit = 2 # for maximum performance (prevents StopReading / ResumeReading messages to the IOBridge)
# Needed for HTTP/1.0 requests with missing Host headers
default-host-header = "spray.io:8765"
# Increase this in order to upload bigger job jars
parsing.max-content-length = 300m
}
akka {
remote.netty.tcp {
# This controls the maximum message size, including job results, that can be sent
# maximum-frame-size = 10 MiB
}
}
I ran into a similar issue. The way to solve it is a bit tricky. First you need to add spark.jobserver.short-timeout to your configuration. Just modify your configuration like this:
jobserver {
  port = 8090
  context-per-jvm = false
  short-timeout = 60s
  ...
}
The second (tricky) part is that you can't fix it without modifying the code of the spark-jobserver application. The attribute which causes the timeout is in the class BinaryManager:
implicit val daoAskTimeout = Timeout(3 seconds)
The default is set to 3 seconds, which apparently is not enough for a big jar. You can increase it to, for example, 60 seconds, which solved the problem for me.
implicit val daoAskTimeout = Timeout(60 seconds)
Actually, you can easily bring down the size of the jar. Some of the dependent jars can be passed using dependent-jar-uris instead of being assembled into one big fat jar.
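A sketch of that second approach, reusing the dependent-jar-uris setting that is already present (commented out) in the context-settings block of the configuration above; the jar path is illustrative and must be reachable from every Spark node:
spark {
  context-settings {
    # illustrative path, not from the original setup
    dependent-jar-uris = ["file:///opt/jars/matching-ml-deps.jar"]
  }
}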
So, I just installed GitLab on my server. I'm running the bundled nginx on port 256, and I've set up HTTPS using Let's Encrypt. There's still a small problem: you can access it through a plain HTTP address, which throws an nginx error, since my external address is https://example.com:256. So I set the redirect_http_to_https setting, and now all requests just time out... Any ideas?
My gitlab.rb config:
## Url on which GitLab will be reachable.
## For more details on configuring external_url see:
## https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/config$
external_url 'https://example.com:256'
#####################
# GitLab Web server #
#####################
## see: https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc/settings/nginx.md#using-a-non-bundled-web-server
## When bundled nginx is disabled we need to add the external webserver user to the GitLab webserver group.
# web_server['external_users'] = []
# web_server['username'] = 'gitlab-www'
# web_server['group'] = 'gitlab-www'
# web_server['uid'] = nil
# web_server['gid'] = nil
# web_server['shell'] = '/bin/false'
# web_server['home'] = '/var/opt/gitlab/nginx'
################
# GitLab Nginx #
################
## see: https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc/settings/nginx.md
nginx['enable'] = true
# nginx['client_max_body_size'] = '250m'
nginx['redirect_http_to_https'] = true
# nginx['redirect_http_to_https_port'] = 8080
# nginx['ssl_client_certificate'] = "/etc/gitlab/ssl/ca.crt" # Most root CA's are included by default
# nginx['ssl_verify_client'] = "off" # enable/disable 2-way SSL client authentication
# nginx['ssl_verify_depth'] = "1" # if ssl_verify_client on, verification depth in the client certificates chain
nginx['ssl_certificate'] = "/etc/letsencrypt/live/example.com-0001/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/example.com-0001/privkey.pem"
# nginx['ssl_ciphers'] = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
# nginx['ssl_prefer_server_ciphers'] = "on"
# nginx['ssl_protocols'] = "TLSv1 TLSv1.1 TLSv1.2" # recommended by https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
# nginx['ssl_session_cache'] = "builtin:1000 shared:SSL:10m" # recommended in http://nginx.org/en/docs/http/ngx_http_ssl_module.html
# nginx['ssl_session_timeout'] = "5m" # default according to http://nginx.org/en/docs/http/ngx_http_ssl_module.html
# nginx['ssl_dhparam'] = nil # Path to dhparams.pem, eg. /etc/gitlab/ssl/dhparams.pem
# nginx['listen_addresses'] = ['*']
# nginx['listen_port'] = nil # override only if you use a reverse proxy: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#setting-the-nginx-listen-port
# nginx['listen_https'] = nil # override only if your reverse proxy internally communicates over HTTP: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#supporting-proxied-ssl
nginx['custom_gitlab_server_config'] = "location ^~ /.well-known { root /var/www/letsencrypt; }"
# nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/example.conf;"
# nginx['proxy_read_timeout'] = 3600
# nginx['proxy_connect_timeout'] = 300
# nginx['proxy_set_headers'] = {
# "Host" => "$http_host",
# "X-Real-IP" => "$remote_addr",
# "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
# "X-Forwarded-Proto" => "https",
# "X-Forwarded-Ssl" => "on"
# }
# nginx['proxy_cache_path'] = 'proxy_cache keys_zone=gitlab:10m max_size=1g levels=1:2'
# nginx['proxy_cache'] = 'gitlab'
# nginx['http2_enabled'] = true
# nginx['real_ip_trusted_addresses'] = []
# nginx['real_ip_header'] = nil
# nginx['real_ip_recursive'] = nil
Uncomment:
# nginx['redirect_http_to_https_port'] = 8080
Make it port 80 like the following:
nginx['redirect_http_to_https_port'] = 80
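After editing /etc/gitlab/gitlab.rb, re-run the Omnibus reconfigure so the bundled nginx picks up the change, and make sure port 80 is actually reachable from outside, since that is where the redirect will now listen:
sudo gitlab-ctl reconfigure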
I'm part of a team that is developing an application that uses the Fiware GEs as part of the Smart-AgriFood accelerator.
We are using the Orion Context Broker to gather the data provided by the sensor network, and we intend to use the PEP Proxy to authenticate the sensor nodes for access to the Orion instance. We have tried the following PEP proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We have only had success with the second (fi-ware-pep-proxy) implementation of the proxy. With fiware-orion-pep we haven't been able to connect to the global Keystone instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IDM) instance for authentication account.lab or cloud.lab, and which ports or addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask:
This one works with the curl command against cloud.lab.fiware.org:4730/v2.0/tokens:
{
  "auth": {
    "passwordCredentials": {
      "username": "<my_user>",
      "password": "<my_password>"
    }
  }
}
This one doesn't work with the curl command against account.lab.fi-ware.org:5000/v3/auth/tokens:
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "domain": {
            "name": "<my_domain>"
          },
          "name": "<my_user>",
          "password": "<my_password>"
        }
      }
    }
  }
}
3) Which implementation should I be using to authenticate the devices or other calls to the Orion instance?
Here are the configurations that I used:
fiware-orion-pep
config.authentication = {
  checkHeaders: true,
  module: 'keystone',
  user: '<my_user>',
  password: '<my_password>',
  domainName: '<my_domain>',
  retries: 3,
  cacheTTLs: {
    users: 1000,
    projectIds: 1000,
    roles: 60
  },
  options: {
    protocol: 'http',
    host: 'account.lab.fiware.org',
    port: 5000,
    path: '/v3/role_assignments',
    authPath: '/v3/auth/tokens'
  }
};
fi-ware-pep-proxy (this one works); I have set the listening port to 1026 in the source code:
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)
There are currently some differences in how both PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfill the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, and so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction meant also that the functionality slightly differs from time to time from the one in the global instance.
That being said, the concrete answer:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
If you try to proceed with telefonicaid/fiware-orion-pep, please also take note that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy.
- the current stable release (0.5.0) is about to change to the next version (probably today), so maybe some of the problems will be solved by the update.
Hope this clarifies some of your doubts.
[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0) from source and from the RPM package created following the tutorial available on GitHub. When creating the RPM package, it is created with the name pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
  original: {
    host: 'localhost',
    port: 10026
  },
  proxy: {
    port: 1026,
    adminPort: 11211
  }
};

config.authentication = {
  checkHeaders: false,
  module: 'keystone',
  user: '<##################>',
  password: '<###################>',
  domainName: 'admin_domain',
  retries: 3,
  cacheTTLs: {
    users: 1000,
    projectIds: 1000,
    roles: 60
  },
  options: {
    protocol: 'http',
    host: 'cloud.lab.fiware.org',
    port: 4730,
    path: '/v3/role_assignments',
    authPath: '/v3/auth/tokens'
  }
};

config.ssl = {
  active: false,
  keyFile: '',
  certFile: ''
};

config.logLevel = 'DEBUG'; // List of component

config.middlewares = {
  require: 'lib/plugins/orionPlugin',
  functions: [
    'extractCBAction'
  ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate.
Log file at /var/log/pepProxy/pepProxy:
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
Result from the client console:
{
  "message": "There was a connection error while authenticating to Keystone: 500",
  "name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?