A Rails 3 application with Passenger 4.0.53/nginx (installed via Passenger) and a PostgreSQL 9.3.5 database is running... partially!
Calling the home URL renders the page as expected per the Rails application (controller => home, action => index),
but hitting the login link, although the proper URL is generated, the result is a 404 error. Restarting Passenger and then nginx has failed to change the behaviour...
/etc/nginx/sites-enabled/default is set as follows:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name ranz2.iwant2go2.com;
    passenger_enabled on;
    rails_env development;
    root /home/deploy/ranz/current/public;

    # location / {
    #     try_files $uri $uri/ =404;
    #     passenger_enabled on;
    # }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
nginx error log
[ 2014-10-14 07:42:57.7851 24433/7f42e3109780 agents/Watchdog/Main.cpp:538 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'passenger_version' => '4.0.53', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_passenger_version' => '4.0.53', 'web_server_pid' => '24432', 'web_server_type' => 'nginx', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
[ 2014-10-14 07:42:57.8090 24436/7f55db281780 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.24432/generation-0/request
[ 2014-10-14 07:42:57.8705 24441/7fbfd457b7c0 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.24432/generation-0/logging
[ 2014-10-14 07:42:57.8715 24433/7f42e3109780 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
[ 2014-10-14 07:42:57.8865 24441/7fbfd457b7c0 agents/LoggingAgent/Main.cpp:289 ]: Caught signal, exiting...
Update: after seeing other similar questions, I disabled the 'passenger_enabled on;' instruction within the first location block, then restarted nginx and the application. No change in behaviour. Subsequently I disabled the first location block entirely, restarted nginx and the application again. No change in behaviour.
What am I missing?
The issue was in nginx.conf. My understanding was that passenger_ruby should point to the result of 'which ruby', which is what I toggled to:
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
# passenger_ruby /usr/bin/ruby;
passenger_ruby /home/deploy/.rvm/rubies/ruby-1.9.3-p547/bin/ruby;
However, RTFM: that returns a Passenger error whose contents point one in the proper direction (good on ya, Phusion!). The manual, section "8.3.1. passenger_ruby", is explicit:
which passenger-config
Paste that result into the subsequent command:
rvm use <desired_version>
<result_of_which_passenger-config> --ruby-command
Then, from the output, you have the string to define passenger_ruby in nginx.conf, which should resemble something like
/home/deploy/.rvm/gems/ruby-1.9.3-p547/wrappers/ruby
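For illustration, with that wrapper path the relevant nginx.conf lines end up looking roughly like the following (the passenger_root line is the one already shown above; the Ruby path is whatever --ruby-command printed on this machine, so adjust it to your own RVM version):

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
# use the RVM wrapper reported by passenger-config --ruby-command, not the result of 'which ruby'
passenger_ruby /home/deploy/.rvm/gems/ruby-1.9.3-p547/wrappers/ruby;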
I'm hoping this is just a stupid question, but after googling around and browsing the WildFly docs, I can't seem to link things up.
I have an EJB application that has been working since forever, but with JPA backed by the H2 database. The entity database has grown so large that the H2 implementation now has extreme performance problems, and I need to migrate to a Derby backing store.
The problem is that the H2 DS is sort of "baked into" WildFly as ExampleDS, but Derby is not, and I can't seem to get Derby defined as a datasource. My first try was to define it with a module.xml, but I wasn't having much luck, so, as I don't need domain mode, I opted to just drop derbyclient.jar into standalone/deployments, which seemed to work fine.
With the client JAR deployed, my attempt to define the DVDDerbyDS datasource gets this:
[jboss@ftgme2 ~/wildfly-23.0.0.Final/bin]$ ./jboss-cli.sh -c
[standalone@localhost:9990 /] /subsystem=datasources/data-source=DVDDerbyDS:add( \
> jndi-name=java:jboss/datasources/DVDDerbyDS,\
> driver-name=derbyclient,\
> connection-url=jdbc:derby://localhost:1527/DVD\
> )
{
    "outcome" => "failed",
    "failure-description" => {
        "WFLYCTL0412: Required services that are not installed:" => ["jboss.jdbc-driver.derbyclient"],
        "WFLYCTL0180: Services with missing/unavailable dependencies" => [
            "org.wildfly.data-source.DVDDerbyDS is missing [jboss.jdbc-driver.derbyclient]",
            "jboss.driver-demander.java:jboss/datasources/DVDDerbyDS is missing [jboss.jdbc-driver.derbyclient]"
        ]
    },
    "rolled-back" => true
}
I'm guessing that this is a simple case of two operands that need to match but don't, probably the derbyclient info. But deploying derbyclient gives no hint of how to reference it, and every permutation I've tried has failed.
Any ideas?
In the management console deployments, derbyclient.jar appears as deployed and enabled, and is referred to only as derbyclient.jar. Modifying the CLI above still fails:
[standalone@localhost:9990 /] /subsystem=datasources/data-source=DVDDerbyDS:add( jndi-name=java:jboss/datasources/DVDDerbyDS, driver-name=derbyclient.jar, connection-url=jdbc:derby://localhost:1527/DVD)
{ "outcome" => "failed", "failure-description" => { "WFLYCTL0412: Required services that are not installed:" => ["jboss.jdbc -driver.derbyclient_jar"], "WFLYCTL0180: Services with missing/unavailable dependencies" => [ "org.wildfly.data-source.DVDDerbyDS is missing [jboss.jdbc-driver.de rbyclient_jar]", "jboss.driver-demander.java:jboss/datasources/DVDDerbyDS is missing [jboss.jdbc-driver.derbyclient_jar]" ] }, "rolled-back" => true } –
The same happens using derbyclient_jar.
[standalone@localhost:9990 /] /subsystem=datasources:installed-drivers-list
{
    "outcome" => "success",
    "result" => [{
        "driver-name" => "h2",
        "deployment-name" => undefined,
        "driver-module-name" => "com.h2database.h2",
        "module-slot" => "main",
        "driver-datasource-class-name" => "",
        "driver-xa-datasource-class-name" => "org.h2.jdbcx.JdbcDataSource",
        "datasource-class-info" => [{"org.h2.jdbcx.JdbcDataSource" => {
            "URL" => "java.lang.String",
            "description" => "java.lang.String",
            "loginTimeout" => "int",
            "password" => "java.lang.String",
            "url" => "java.lang.String",
            "user" => "java.lang.String"
        }}],
        "driver-class-name" => "org.h2.Driver",
        "driver-major-version" => 1,
        "driver-minor-version" => 4,
        "jdbc-compliant" => true
    }]
}
[standalone@localhost:9990 /]
but
[jboss@ftgme2 ~/wildfly-23.0.0.Final/standalone/deployments]$ cat derbyclient.jar.deployed
derbyclient.jar
[standalone@localhost:9990 /] /deployment=derbyclient.jar:browse-content(path=META-INF/services/)
{
    "outcome" => "success",
    "result" => [{
        "path" => "java.sql.Driver",
        "directory" => false,
        "file-size" => 47L
    }]
}
It looks like the derbyclient.jar does not contain everything you need. It worked for me with the following.
First you need to install the driver as a module:
# Set the path to Derby and add a module for the driver
set DERBY_HOME=/path/to/db-derby-10.15.2.0-bin
module add --name=org.apache.derby --resources=$DERBY_HOME/lib/derbyclient.jar,$DERBY_HOME/lib/derbytools.jar,$DERBY_HOME/lib/derbyshared.jar --dependencies=javax.api,javax.transaction.api --resource-delimiter=,
# Add the JDBC driver
/subsystem=datasources/jdbc-driver=derby:add(driver-module-name=org.apache.derby, driver-name=derby, driver-class-name=org.apache.derby.jdbc.ClientDriver)
Then you can add your data source:
/subsystem=datasources/data-source=DVDDerbyDS:add(jndi-name=java:jboss/datasources/DVDDerbyDS, driver-name=derby, connection-url=jdbc:derby://localhost:1527/DVD)
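If it helps, the new datasource can then be sanity-checked from the same CLI session; this is just a quick illustrative check that exercises the connection pool of the DVDDerbyDS datasource added above:

# expected to report "outcome" => "success" if the Derby server on localhost:1527 is reachable
/subsystem=datasources/data-source=DVDDerbyDS:test-connection-in-pool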
The following RE matches "http://my.domain/video.mp4" successfully but cannot match "http://my.domain/abc/video.mp4".
location ~ "^.+(mp4|mkv|m4a)$" {
root /home/user/Videos;
}
The log of Nginx reads
[05/May/2021:12:29:25 +0800] "GET /video.mp4 HTTP/1.1" status: 206, body_bytes: 1729881
[05/May/2021:12:29:46 +0800] "GET /abc/video.mp4 HTTP/1.1" status: 404, body_bytes: 555
This is weird. Actually, I want URLs under "/service1/" to be mapped to user1's directory and URLs under "/service2/" to be mapped to user2's. So I write:
location ~ "^/service1/.+(mp4|mkv|m4a)$" {
root /home/user1/Videos;
}
location ~ "^/service2/.+(mp4|mkv|m4a)$" {
root /home/user2/Videos;
}
And as in the first example, this config cannot match anything.
I searched a lot on Google, but no answer explains this. I want to get it to work. Thanks!
I now know where the problem is. In my second config, if I request "/service1/abcd.mp4", Nginx will try to locate it at "/home/user1/Videos/service1/abcd.mp4", but the file is actually at "/home/user1/Videos/abcd.mp4". Theoretically, I can bypass this with a rewrite:
location ~ "^/service1/.+(mp4|mkv|m4a)$" {
rewrite "^/service1/(.+(mp4|mkv|m4a))$" "$1";
root /home/user1/Videos;
}
location ~ "^/service2/.+(mp4|mkv|m4a)$" {
rewrite "^/service2/(.+(mp4|mkv|m4a))$" "$1";
root /home/user2/Videos;
}
But this is not working. It's driving me crazy.
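For reference, a minimal sketch of the alternative that is usually suggested for this kind of prefix-to-directory mapping, assuming the files really live directly under each user's Videos directory: use alias with a regex capture instead of root plus rewrite, since alias replaces the matched part of the URI instead of appending the whole URI to the root directory. This is illustrative, not a verified fix for this exact setup:

location ~ "^/service1/(.+\.(mp4|mkv|m4a))$" {
    # /service1/abcd.mp4 is served from /home/user1/Videos/abcd.mp4
    alias /home/user1/Videos/$1;
}

location ~ "^/service2/(.+\.(mp4|mkv|m4a))$" {
    alias /home/user2/Videos/$1;
}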
We use Symfony with API Platform in Docker, and we have a problem with Varnish.
For local development, Varnish works with the default.vcl configuration file (https://github.com/api-platform/api-platform/blob/master/api/docker/varnish/conf/default.vcl). When deployed to the server, Varnish returns the error "Error (null) Backend fetch failed".
When Varnish is disabled and requests are routed straight to nginx with PHP-FPM, API Platform works properly.
I increased the http_resp_hdr_len parameter to 131072 bytes (128K) and http_resp_size to 10485760 bytes (10 MB), but it doesn't help and the error remains.
Docker command to start Varnish:
CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-p", "http_resp_hdr_len=131072", "-p", "http_resp_size=10485760"]
The parameter .first_byte_timeout = 600s; was also added to default.vcl.
default.vcl
vcl 4.0;
import std;
backend default {
    .host = "api";
    .port = "80";
    .first_byte_timeout = 600s;

    # Health check
    .probe = {
        .url = "/";
        .timeout = 5s;
        .interval = 50s;
        .window = 5;
        .threshold = 3;
    }
}
# Hosts allowed to send BAN requests
acl invalidators {
    "localhost";
    "php";
    # local Kubernetes network
    "10.0.0.0"/8;
    "172.16.0.0"/12;
    "192.168.0.0"/16;
}
sub vcl_recv {
    if (req.restarts > 0) {
        set req.hash_always_miss = true;
    }

    # Remove the "Forwarded" HTTP header if exists (security)
    unset req.http.forwarded;

    # To allow API Platform to ban by cache tags
    if (req.method == "BAN") {
        if (client.ip !~ invalidators) {
            return (synth(405, "Not allowed"));
        }

        if (req.http.ApiPlatform-Ban-Regex) {
            ban("obj.http.Cache-Tags ~ " + req.http.ApiPlatform-Ban-Regex);
            return (synth(200, "Ban added"));
        }

        return (synth(400, "ApiPlatform-Ban-Regex HTTP header must be set."));
    }

    # For health checks
    if (req.method == "GET" && req.url == "/healthz") {
        return (synth(200, "OK"));
    }
}
sub vcl_hit {
    if (obj.ttl >= 0s) {
        # A pure unadulterated hit, deliver it
        return (deliver);
    }

    if (std.healthy(req.backend_hint)) {
        # The backend is healthy
        # Fetch the object from the backend
        return (restart);
    }

    # No fresh object and the backend is not healthy
    if (obj.ttl + obj.grace > 0s) {
        # Deliver graced object
        # Automatically triggers a background fetch
        return (deliver);
    }

    # No valid object to deliver
    # No healthy backend to handle request
    # Return error
    return (synth(503, "API is down"));
}
sub vcl_deliver {
    # Don't send cache tags related headers to the client
    unset resp.http.url;

    # Comment the following line to send the "Cache-Tags" header to the client (e.g. to use CloudFlare cache tags)
    unset resp.http.Cache-Tags;
}

sub vcl_backend_response {
    # Ban lurker friendly header
    set beresp.http.url = bereq.url;

    # Add a grace in case the backend is down
    set beresp.grace = 1h;
}
Please advise what might be the problem with Varnish and how to make it work correctly.
The problem was solved as follows: I took the parameters for running Varnish from thomasmoreaumaster's comment (https://github.com/api-platform/api-platform/issues/1367).
CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-p", "http_resp_hdr_len=128k", "-p", "http_resp_size=128k", "-p", "http_req_hdr_len=64k", "-p", "workspace_backend=256k", "-p", "workspace_client=256k", "-p", "http_max_hdr=256"]
Also, in the nginx that proxies API Platform, I removed the directory binding with SSL and set up an nginx proxy with SSL enabled via a volume.
Now Varnish works well.
Thanks for your help and support.
I installed the DSC module and added an AD user to the domain controller using Puppet. The code below works fine when hard-coding the password as plain text. Is it possible to somehow encrypt those passwords?
I read that hiera-eyaml is the solution for this, so I encrypted the password:
[root@PUPPET puppet]# /opt/puppetlabs/puppet/bin/eyaml encrypt -p
Enter password: **********
string: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAl/+uUACl6WpGAnA1sSqEuTp39SVYfHc7J0BMvC+a2C0YzQg1V]
Then I stored that encrypted password in the /etc/common.eyaml file (specified in the Hiera config file):
/opt/puppetlabs/puppet/bin/eyaml edit /etc/common.eyaml
I can decrypt the file successfully:
/opt/puppetlabs/puppet/bin/eyaml decrypt -f /etc/common.eyaml
Then I specified the encrypted password in the manifest file
/etc/puppetlabs/code/environments/production/manifests/site.pp:
dsc_xADUser {'FirstUser':
  dsc_ensure                        => 'present',
  dsc_domainname                    => 'ad.contoso.com',
  dsc_username                      => 'tfl',
  dsc_userprincipalname             => 'tfl@ad.contoso.com',
  dsc_password                      => {
    'user'     => 'Administrator@ad.contoso.com',
    'password' => Sensitive('pass')
  },
  dsc_passwordneverexpires          => true,
  dsc_domainadministratorcredential => {
    'user'     => 'Administrator@ad.contoso.com',
    'password' => Sensitive(lookup('password'))
  },
}
On the Windows node I got this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Function lookup() did not find a value for the name 'password' on node windows.example.com
Hiera config file:
cat /etc/puppetlabs/puppet/hiera.yaml
---
# Hiera 5 Global configuration file
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Eyaml hierarchy"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "/etc/common.eyaml"
    options:
      pkcs7_private_key: "/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem"
      pkcs7_public_key: "/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem"
cat /etc/common.eyaml
password: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAl/+uUACl6WpGAnA1sSqEuTp39SVYfHc7J0BMvC+a2C0YzQg1V]
I'm new to Puppet and this Hiera is confusing me.
For starters, there is a typo in your Hiera config file. The path to the data should be:
paths:
  - "/etc/common.eyaml"
After fixing that, you need to retrieve the value from Hiera. This is performed with the puppet lookup function. Since you have a single key value pair here in a single data file, this can be performed with a minimal number of arguments.
dsc_xADUser {'FirstUser':
  dsc_ensure                        => 'present',
  dsc_domainname                    => 'ad.contoso.com',
  dsc_username                      => 'tfl',
  dsc_userprincipalname             => 'tfl@ad.contoso.com',
  dsc_password                      => {
    'user'     => 'Administrator@ad.contoso.com',
    'password' => Sensitive('pass')
  },
  dsc_passwordneverexpires          => true,
  dsc_domainadministratorcredential => {
    'user'     => 'Administrator@ad.contoso.com',
    'password' => lookup('string'),
  },
}
However, you also really want to redact that password from your logs and reports. You would want to wrap that password String in a Sensitive data type.
'password' => Sensitive(lookup('string')),
You seem to already be doing that for your other password, which is being passed in as the String 'pass'.
A side note to all of this: Puppet has intrinsic support for lookup retrievals from Vault and Conjur in version 6, so that will soon become best practice instead of hiera-eyaml.
Ufff, after much struggling I finally got it working:
cat /etc/puppetlabs/puppet/hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Eyaml hierarchy"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "nodes/%{trusted.certname}.yaml"
      - "windowspass.eyaml"
    options:
      pkcs7_private_key: "/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem"
      pkcs7_public_key: "/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem"
Created password:
/opt/puppetlabs/puppet/bin/eyaml encrypt -l 'password' -s 'Pass' --pkcs7-public-key=/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem --pkcs7-private-key=/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem
Added it to /etc/puppetlabs/puppet/data/windowspass.eyaml file:
/opt/puppetlabs/puppet/bin/eyaml edit windowspass.eyaml --pkcs7-public-key=/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem --pkcs7-private-key=/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem
cat /etc/puppetlabs/puppet/data/windowspass.eyaml
---
password: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAUopetXenh/+DN1+VesIZUI5y4k3kOTn2xa5uBrtGZP3GvGqoWfwAbYsfeNApjeMG+lg93/N/6mE9T59DPh]
Tested decryption:
/opt/puppetlabs/puppet/bin/eyaml decrypt -f windowspass.eyaml --pkcs7-public-key=/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem --pkcs7-private-key=/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem
As Matt suggested, I mapped the content of windowspass.eyaml in the manifest file:
'password' => Sensitive(lookup('password'))
This debugging command helped me a lot:
puppet master --debug --compile windows.example.com --environment=production
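For what it's worth, the lookup itself can also be traced directly with puppet lookup, which shows which Hiera layer and data file a key resolves from (the node name below is just the same example node as above):

puppet lookup password --environment production --node windows.example.com --explain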
Thanks everyone, especially to Matt
I'm part of a team that is developing an application that uses the FIWARE GEs as part of the Smart-AgriFood accelerator.
We are using the Orion Context Broker for gathering the data provided by the sensor network, and we intend to use the PEP Proxy to authenticate the sensor nodes for access to the Orion instance. We have tried the following PEP Proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We have only had success with the second (fi-ware-pep-proxy) implementation of the proxy. With fiware-orion-pep we haven't been able to connect to the global Keystone instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IdM) instance for authentication account.lab or cloud.lab, and which ports or addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask this:
This one works with the curl command at cloud.lab.fiware.org:4730/v2.0/tokens:
{
    "auth": {
        "passwordCredentials": {
            "username": "<my_user>",
            "password": "<my_password>"
        }
    }
}'
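For context, that body was posted with a plain curl call; since the original command was trimmed from the snippet above, the following is only an illustrative reconstruction of how such a Keystone v2.0 token request is typically sent:

# POST the passwordCredentials body shown above to the v2.0 tokens endpoint
curl -s -X POST http://cloud.lab.fiware.org:4730/v2.0/tokens \
     -H "Content-Type: application/json" \
     -d '{"auth": {"passwordCredentials": {"username": "<my_user>", "password": "<my_password>"}}}'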
This one doesn't work with the curl command at account.lab.fi-ware.org:5000/v3/auth/tokens:
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "<my_domain>"
                    },
                    "name": "<my_user>",
                    "password": "<my_password>"
                }
            }
        }
    }
}'
3) Which implementation should I be using to authenticate the devices or other calls to the Orion instance?
Here are the configurations that I used:
fiware-orion-pep
config.authentication = {
    checkHeaders: true,
    module: 'keystone',
    user: '<my_user>',
    password: '<my_password>',
    domainName: '<my_domain>',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'account.lab.fiware.org',
        port: 5000,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};
fi-ware-pep-proxy (this one works); I have set the listening port to 1026 in the source code
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)
There are currently some differences in how both PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfill the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, and so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction meant also that the functionality slightly differs from time to time from the one in the global instance.
That being said, the concrete answer:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
Please, if you try to proceed with the telefonicaid/fiware-orion-pep take note also that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy.
- the current stable release (0.5.0) is about to change to the next version (probably today), so maybe some of the problems will be solved by the update.
Hope this clarifies some of your doubts.
[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0) from sources and from the RPM package created following the tutorial available on GitHub. When creating the RPM package, it is created with the name pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
    original: {
        host: 'localhost',
        port: 10026
    },
    proxy: {
        port: 1026,
        adminPort: 11211
    }
};

config.authentication = {
    checkHeaders: false,
    module: 'keystone',
    user: '<##################>',
    password: '<###################>',
    domainName: 'admin_domain',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'cloud.lab.fiware.org',
        port: 4730,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};

config.ssl = {
    active: false,
    keyFile: '',
    certFile: ''
};

config.logLevel = 'DEBUG'; // List of component

config.middlewares = {
    require: 'lib/plugins/orionPlugin',
    functions: [
        'extractCBAction'
    ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate:
Log file at /var/log/pepProxy/pepProxy:
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
result from the client console
{
    "message": "There was a connection error while authenticating to Keystone: 500",
    "name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?