Fiware Orion - pepProxy - fiware-orion

I'm part of a team developing an application that uses the FIWARE GEs as part of the SmartAgriFood accelerator.
We are using the Orion Context Broker to gather the data provided by the sensor network, and we intend to use a PEP Proxy to authenticate the sensor nodes' access to the Orion instance. We have tried the following PEP Proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We only had success with the second implementation of the proxy (fi-ware-pep-proxy). With fiware-orion-pep we haven't been able to connect to the Keystone Global instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IdM) instance for authentication account.lab or cloud.lab, and which ports and addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask:
This request body works with the curl command at cloud.lab.fiware.org:4730/v2.0/tokens:
{
    "auth": {
        "passwordCredentials": {
            "username": "<my_user>",
            "password": "<my_password>"
        }
    }
}
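For reference, the full working command looks like this (a sketch; the Content-Type header is my assumption, since only the endpoint and body were noted above):

curl -s -X POST http://cloud.lab.fiware.org:4730/v2.0/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"passwordCredentials": {"username": "<my_user>", "password": "<my_password>"}}}'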
The same kind of request doesn't work with the curl command at account.lab.fi-ware.org:5000/v3/auth/tokens:
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "<my_domain>"
                    },
                    "name": "<my_user>",
                    "password": "<my_password>"
                }
            }
        }
    }
}
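The failing call was issued the same way (again a sketch, with the headers assumed as above):

curl -s -X POST http://account.lab.fi-ware.org:5000/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"domain": {"name": "<my_domain>"}, "name": "<my_user>", "password": "<my_password>"}}}}}'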
3) Which implementation should I be using to authenticate the devices and other calls to the Orion instance?
Here are the configurations that I used:
fiware-orion-pep
config.authentication = {
    checkHeaders: true,
    module: 'keystone',
    user: '<my_user>',
    password: '<my_password>',
    domainName: '<my_domain>',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'account.lab.fiware.org',
        port: 5000,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};
fi-ware-pep-proxy (this one works; I have set the listening port to 1026 in the source code):
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)

There are currently some differences in how the two PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfil the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction also means that its functionality occasionally differs slightly from the one in the global instance.
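For context, a multitenant request through the former proxy just adds those headers to a normal Orion call; a sketch (the service, service path, entity name and token are made-up values for illustration):

curl http://<pep_host>:1026/v1/contextEntities/Sensor1 \
    -H "X-Auth-Token: <token>" \
    -H "fiware-service: smartagrifood" \
    -H "fiware-servicepath: /parcels"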
That being said, the concrete answer:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
Please, if you proceed with telefonicaid/fiware-orion-pep, take note also that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy;
- the current stable release (0.5.0) is about to change to the next version (probably today), so maybe some of the problems will be solved by the update.
Hope this clarifies some of your doubts.

[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0), both from sources and from the RPM package created by following the tutorial available on GitHub. When creating the RPM package, it is created with the name pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
    original: {
        host: 'localhost',
        port: 10026
    },
    proxy: {
        port: 1026,
        adminPort: 11211
    }
};

config.authentication = {
    checkHeaders: false,
    module: 'keystone',
    user: '<##################>',
    password: '<###################>',
    domainName: 'admin_domain',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'cloud.lab.fiware.org',
        port: 4730,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};

config.ssl = {
    active: false,
    keyFile: '',
    certFile: ''
};

config.logLevel = 'DEBUG';

config.middlewares = {
    require: 'lib/plugins/orionPlugin',
    functions: [
        'extractCBAction'
    ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate:
log file at /var/log/pepProxy/pepProxy
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
Result from the client console:
{
    "message": "There was a connection error while authenticating to Keystone: 500",
    "name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?

Related

control chalice IP connection to postgres

I built a small Chalice app connected to Postgres that does some inserts. In the pg_hba.conf file (the database is on another server) I have allowed only certain IPs to connect, but almost every request from Lambda comes from a different IP.
This is my Chalice app:
import psycopg2.extras
from psycopg2.extras import execute_values
from chalice import Chalice, Response

app = Chalice(app_name='hello_world')
app.debug = True

conn = psycopg2.connect(user='user',
                        password='Password123',
                        host='123.12.12.123',
                        port=5432,
                        database='test_db')
cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

@app.route("/")
def main_page():
    cursor.execute("SELECT COUNT(*) FROM main WHERE status=1")
    g = dict(cursor.fetchone())
    return {"count": g['count']}
It works when I deploy locally on 127.0.0.1. Is there a way to manage the Lambda IP when connecting to the database?
I am open to any suggestions.
Create your VPC, private subnet, public subnets, security groups, etc.
Note: this is the challenging part.
Tutorial: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
The point of the VPC setup is that a Lambda in a private subnet reaches the internet through a NAT gateway, so every outbound connection uses the NAT gateway's single Elastic IP, and that is the one address you need to allow in pg_hba.conf.
Then copy the security-group ids and subnet ids into the Chalice config, .chalice/config.json:
{
    "version": "2.0",
    "app_name": "XYZ",
    "stages": {
        "prod": {
            "security_group_ids": [
                "sg-YYYYYYYY"
            ],
            "subnet_ids": [
                "subnet-XXXXXXXX"
            ]
        }
    }
}
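With the Lambda behind the NAT gateway, a single pg_hba.conf entry is then enough; for example (the address below is a placeholder for whatever Elastic IP your NAT gateway is assigned):

host    test_db    user    203.0.113.10/32    md5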

Why I can't change the administrative state of a port on cisco apic via rest API

When I try to change the administrative state of a port on the Cisco APIC via the REST API (aci_rest), I get the following error:
"msg": "APIC Error 170: Invalid access, MO: l1PhysIf",
"status": -1
Does anyone have any idea about that?
Thanks in advance.
- name: Change admin state of the port
  aci_rest:
    hostname: "{{ inventory_hostname }}"
    username: "{{ aci_user }}"
    password: "{{ aci_password }}"
    validate_certs: no
    path: "/api/node/mo/topology/pod-{{ pod_id }}/node-{{ node_id }}/sys/phys-[eth{{ interface }}].json"
    method: post
    content:
      {
        "l1PhysIf": {
          "attributes": {
            "adminSt": "down"
          }
        }
      }
I've solved the problem. Cisco has restricted the "l1PhysIf" object, and there is documentation that looks like this:
Class l1: PhysIf (CONCRETE)
Class ID:3627
Class Label: Layer 1 Physical Interface Configuration
Encrypted: false - Exportable: false - Persistent: true - Configurable: false - Subject to Quota: Disabled - Abstraction Layer: Concrete Model - APIC NX Processing: Disabled
Write Access: [NON CONFIGURABLE]
I've used "fabricRsOosPath" instead and it has worked.
Same thing here, but I'm just trying to add a description. I get locking down a physical port object. Kinda. But a description?
The class mentioned above doesn't have any attributes that apply to a port description. Anyone have success at this?
I'm currently just using Postman to test, but will eventually throw this into Python.
API Post (port 10 for example):
https://{{url}}/api/node/mo/topology/pod-1/node-101/sys/phys-[eth1/10].json
JSON body:
{
    "l1PhysIf": {
        "attributes": {
            "descr": "CHANGED"
        }
    }
}
Response: 400 Bad Request
{
    "error": {
        "attributes": {
            "code": "170",
            "text": "Invalid access, MO: l1PhysIf"
        }
    }
}

IdentityServer 4 in k8s behind loadbalancer

I have an IdentityServer deployed to Kubernetes. I also configured Google and Facebook auth (see below). The HTTPS termination is done by the K8s Ingress.
To keep the identity server working with HTTPS, I set forwarding rules (see below).
But from now on I get the following error and an HTTP 500 when a user tries to log in:
System.InvalidOperationException: No authentication handler is
configured to handle the scheme: Identity.External
The line of code that triggers the error is in the account controller:
signInManager.ExternalLoginSignInAsync(provider, userIdClaim.Value, true);
My identity server startup looks like this:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto,
    ForwardLimit = null,
    RequireHeaderSymmetry = false
});

app.UseIdentityServer();

app.UseGoogleAuthentication(new GoogleOptions
{
    AuthenticationScheme = "Google",
    DisplayName = "Google",
    SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme,
    ClientId = "dfdfsf",
    ClientSecret = "-cf-"
});

app.UseStaticFiles();
app.UseMvcWithDefaultRoute();
app.UseStaticFiles();
app.UseMvcWithDefaultRoute();
What am I missing?

grunt-sftp-deploy unable to connect to server

I am a noob to grunt and would like to start using it.
Here is my gruntfile:
module.exports = function(grunt) {
    // Project configuration.
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        devDir: 'dev/dir',
        prodDir: 'prod/dir',
        'sftp-deploy': {
            prod: {
                auth: {
                    host: 'server.com',
                    port: 22,
                    authKey: {
                        "username": "username1",
                        "password": "password2"
                    }
                },
                src: '<%=devDir%>',
                dest: '/test/env/',
                concurrency: 4,
                progress: true
            }
        }
    });

    // load modules
    grunt.loadNpmTasks('grunt-sftp-deploy');

    // Default task(s).
    grunt.registerTask('default', ['sftp-deploy']);
};
I am getting this error when I run 'grunt' in PowerShell:
Running "sftp-deploy:prod" (sftp-deploy) task
Logging in with username username1
Concurrency : 4
Fatal error: Connection :: error
What am I doing wrong?
Thanks!
Ok, a few things to try... (sorry - a month late!)
run:
grunt sftp-deploy --verbose
This will give you a little more info regarding your error.
I solved my error after realising I couldn't create folders on my server, only upload files. So it might be worth testing that you can accomplish manually what you're asking grunt to do.
Lastly, try moving your username/password into a .ftppass file; see the docs [here](https://www.npmjs.com/package/grunt-sftp-deploy).
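A sketch of the .ftppass approach, based on the package docs (the key name "serverKey" is arbitrary):

// file - .ftppass (JSON, kept out of version control)
{
    "serverKey": {
        "username": "username1",
        "password": "password2"
    }
}

// in the gruntfile, authKey then names that entry instead of embedding credentials:
auth: {
    host: 'server.com',
    port: 22,
    authKey: 'serverKey'
}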

Accessing config variables from other config files

I am having problems using, in one config file, a config var set in another config file. E.g.
// file - config/local.js
module.exports = {
    mongo_db: {
        username: 'TheUsername',
        password: 'ThePassword',
        database: 'TheDatabase'
    }
}
// file - config/connections.js
module.exports.connections = {
    mongo_db: {
        adapter: 'sails-mongo',
        host: 'localhost',
        port: 27017,
        user: sails.config.mongo_db.username,
        password: sails.config.mongo_db.password,
        database: sails.config.mongo_db.database
    },
}
When I 'sails lift', I get the following error:
user: sails.config.mongo_db.username,
^
ReferenceError: sails is not defined
I can access the config variables in other places - e.g, this works:
// file - config/bootstrap.js
module.exports.bootstrap = function(cb) {
    console.log('Dumping config: ', sails.config);
    cb();
}
This dumps all the config settings to the console - I can even see the config settings for mongo_db in there!
I'm so confused.
You can't access sails inside of config files, since Sails config is still being loaded when those files are processed! In bootstrap.js, you can access the config inside the bootstrap function, since that function gets called after Sails is loaded, but not above the function.
In any case, config/local.js gets merged on top of all the other config files, so you can get what you want this way:
// file - config/local.js
module.exports = {
    connections: {
        mongo_db: {
            username: 'TheUsername',
            password: 'ThePassword',
            database: 'TheDatabase'
        }
    }
}
// file - config/connections.js
module.exports.connections = {
    mongo_db: {
        adapter: 'sails-mongo',
        host: 'localhost',
        port: 27017
    },
}
If you really need to access one config file from another you can always use require, but it's not recommended. Since Sails merges config files together based on several factors (including the current environment), it's possible you'd be reading some invalid options. Best to do things the intended way: use config/env/* files for environment-specific settings (e.g. config/env/production.js), config/local.js for settings specific to a single system (like your computer) and the rest of the files for shared settings.
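For instance, a minimal sketch of an environment-specific override (the hostname and credentials are placeholders, not values from the question):

// file - config/env/production.js
module.exports = {
    connections: {
        mongo_db: {
            host: 'prod-db.example.com',
            username: 'ProdUsername',
            password: 'ProdPassword',
            database: 'ProdDatabase'
        }
    }
}

Sails merges this on top of config/connections.js only when the app is lifted in the production environment (e.g. NODE_ENV=production).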