A few days ago, I ran a Sails.js app for the first time in production, and this warning showed up:
Warning: connection.session() MemoryStore is not
designed for a production environment, as it will leak
memory, and will not scale past a single process.
I understand that a similar question has been asked, and the answer seems to be to periodically clean up the sessionStore with code like this:
function sessionCleanup() {
  sessionStore.all(function(err, sessions) {
    for (var i = 0; i < sessions.length; i++) {
      // getting each session forces the store to check its expiry and drop it if it has expired
      sessionStore.get(sessions[i], function() {});
    }
  });
}
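(For illustration, that cleanup is meant to be run on a timer; the sketch below assumes an arbitrary one-hour interval.)
// Hedged sketch: schedule the cleanup periodically; the one-hour interval is an assumption.
setInterval(sessionCleanup, 60 * 60 * 1000);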
How can I get a reference to the sessionStore in Sails.js?
All you need to do is replace the memory adapter in config/session.js with another adapter, for instance Redis:
module.exports.session = {
  secret: '<YOUR_SECRET>',
  adapter: 'redis',
  host: 'localhost',
  port: 6379,
  ttl: <REDIS_TTL_IN_SECONDS>,
  pass: <REDIS_PASSWORD>,
  prefix: 'sess:'
};
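If you would rather not hard-code the values, here is a minimal sketch of the same config reading them from environment variables; SESSION_SECRET, REDIS_HOST, REDIS_PORT and REDIS_PASSWORD are assumed variable names, not anything Sails requires.
// config/session.js - hedged sketch: same Redis adapter, values read from the environment
module.exports.session = {
  secret: process.env.SESSION_SECRET,
  adapter: 'redis',
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  ttl: 24 * 60 * 60, // let Redis expire sessions after a day (assumed TTL)
  pass: process.env.REDIS_PASSWORD,
  prefix: 'sess:'
};
With a TTL-backed store like Redis, expired sessions are removed by the store itself, so the periodic sessionCleanup from the question is no longer needed.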
I'm trying to stand up a proof of concept that ingests an RTSP video stream into Kinesis Video Streams. The provided documentation includes a Docker image, hosted by AWS on 546150905175.dkr.ecr.us-west-2.amazonaws.com, that seems to have everything I need to do this. What I am having trouble with, though, is getting that deployment (via an Amplify custom category, in TypeScript CDK) to work.
I've tried different variations on the following:
import * as iam from "@aws-cdk/aws-iam";
import * as ecs from "@aws-cdk/aws-ecs";
import * as ec2 from "@aws-cdk/aws-ec2";
const kinesisUserAccessKey = new iam.AccessKey(this, 'KinesisStreamUserAccessKey', {
user: kinesisStreamUser,
})
const servicePrincipal = new iam.ServicePrincipal('ecs-tasks.amazonaws.com');
const executionRole = new iam.Role(this, 'IngestVideoTaskDefExecutionRole', {
assumedBy: servicePrincipal,
managedPolicies: [
iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AmazonECSTaskExecutionRolePolicy'),
]
});
const taskDefinition = new ecs.FargateTaskDefinition(this, 'IngestVideoTaskDef', {
cpu: 512,
memoryLimitMiB: 1024,
executionRole,
})
const image = ecs.ContainerImage.fromRegistry('546150905175.dkr.ecr.us-west-2.amazonaws.com/kinesis-video-producer-sdk-cpp-amazon-linux:latest');
taskDefinition.addContainer('IngestVideoContainer', {
command: [
'gst-launch-1.0',
'rtspsrc',
`location="${locationParam.secretValue.toString()}"`,
'short-header=TRUE',
'!',
'rtph264depay',
'!',
'video/x-h264,',
'format=avc,alignment=au',
'!',
'kvssink',
`stream-name="${cfnStream.name}"`,
'storage-size=512',
`access-key="${kinesisUserAccessKey.accessKeyId}"`,
`secret-key="${kinesisUserAccessKey.secretAccessKey.toString()}"`,
`aws-region="${REGION}"`,
// `aws-region="${cdk.Aws.REGION}"`,
],
image,
logging: new ecs.AwsLogDriver({
streamPrefix: 'IngestVideoContainer',
}),
})
const service = new ecs.FargateService(this, 'IngestVideoService', {
cluster,
taskDefinition,
desiredCount: 1,
securityGroups: [
ec2.SecurityGroup.fromSecurityGroupId(this, 'DefaultSecurityGroup', SECURITY_GROUP_ID)
],
vpcSubnets: {
subnets: SUBNET_IDS.map(subnetId => ec2.Subnet.fromSubnetId(this, subnetId, subnetId)),
}
})
But it seems like regardless of what I do, an amplify push just stays 'in progress' for about an hour until I go into the CloudFormation console and cancel the stack update. Digging through to the ECS console, though, I managed to find an actual error message:
Resourceinitializationerror: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.us-west-2.amazonaws.com/": dial tcp 52.94.177.118:443: i/o timeout
It seems to be some kind of networking issue, but I'm not sure how to proceed. Any assistance you can provide would be wonderful. Cheers!
Figured it out. For those stuck with similar issues: you have to give the task definition an execution role with AmazonECSTaskExecutionRolePolicy (already reflected in the code above), and set assignPublicIp: true on the service.
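For reference, a minimal sketch of the corrected service definition, reusing the same assumed identifiers (cluster, taskDefinition, SECURITY_GROUP_ID, SUBNET_IDS) from the question:
// Hedged sketch: same FargateService as above, with a public IP assigned so a task in a
// public subnet can reach the ECR and Kinesis endpoints instead of timing out on registry auth.
const service = new ecs.FargateService(this, 'IngestVideoService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  assignPublicIp: true, // the fix
  securityGroups: [
    ec2.SecurityGroup.fromSecurityGroupId(this, 'DefaultSecurityGroup', SECURITY_GROUP_ID)
  ],
  vpcSubnets: {
    subnets: SUBNET_IDS.map(subnetId => ec2.Subnet.fromSubnetId(this, subnetId, subnetId)),
  }
})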
I am currently using Prisma on a large-scale project. When performing a complex query, the Prisma cluster often errors out with the following message in the Docker logs (redacted for readability):
{"key":"error/unhandled","requestId":"local:ck7ditux500570716cl5f8x3r","clientId":"default$default","payload":{"exception":"java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$3#552b85a4 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2#1d4391f7[Running, pool size = 9, active threads = 9, queued tasks = 1000, completed tasks = 43440]","query":"query ($where: TaskWhereUniqueInput!)
{\n task(where: $where) {\n workflow {\n tasks {\n id\n state\n parentReq\n frozen\n parentTask {\n id\n state\n }\n }\n }\n }\n}\n","variables":
"{\"where\":{\"id\":\"ck6twx873bs550874ne867066\"}}",
"code":"0","stack_trace":"java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)\\n
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
...
"message":"Task slick.basic.BasicBackend$DatabaseDef$$anon$3#552b85a4 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2#1d4391f7[Running, pool size = 9, active threads = 9, queued tasks = 1000, completed tasks = 43440]"}}
This is a frequent error when handling large queries. Has anyone come up with a way to configure Prisma or do internal batch operations to avoid this concurrency error?
Option 1:
I ran into this issue when running a Prisma mutation in a loop over a lot of data, which seemed to create too many concurrent database operations.
My solution was to throttle the requests:
async function runThrottled(thingToLoop) {
  for (const thing of thingToLoop) {
    // wait one second before each operation so they run sequentially instead of all at once
    await new Promise(resolve => setTimeout(resolve, 1000));
    // Prisma operation here
  }
}
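As a variation on the same idea, here is a sketch that processes the items in small batches so that only a handful of operations are in flight at once; batchSize and doPrismaOperation are placeholders for illustration, not Prisma APIs.
// Hedged sketch: run the work in fixed-size batches instead of strictly one per second.
async function runInBatches(items, batchSize = 5) {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // wait for the whole batch to finish before starting the next one
    await Promise.all(batch.map(item => doPrismaOperation(item)));
  }
}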
Option 2:
I've read other posts saying that you can also set a limit on the number of concurrent connections.
See: https://v1.prisma.io/docs/1.25/prisma-server/database-connector-POSTGRES-jgfr/#overview
Essentially, this restricts the number of connections when too many are being created. See the following Prisma config as an example:
PRISMA_CONFIG: |
  managementApiSecret: __YOUR_MANAGEMENT_API_SECRET__
  port: 4466
  databases:
    default:
      connector: postgres
      migrations: __ENABLE_DB_MIGRATIONS__
      host: __YOUR_DATABASE_HOST__
      port: __YOUR_DATABASE_PORT__
      user: __YOUR_DATABASE_USER__
      password: __YOUR_DATABASE_PASSWORD__
      connectionLimit: __YOUR_CONNECTION_LIMIT__
I am trying to implement some integration tests using Mocha for functions that interact with a Postgres database through Knex, in a Node.js Express app.
The functions work outside of Mocha: I can start the app in node or nodemon, submit requests through Postman, retrieve records from the database, add new records, and so on. But when I try to test the code through Mocha, I get errors like the following for any function that tries to access the database:
select * from "item" where "user_id" = $1 - relation "item" does not exist
The environment variable for the database connection is set to connect to the right database; when I manually test the app end-to-end, everything works; I get data back from the database.
I've included what I think are the relevant code snippets below: the test script for one of the tests that won't work, the function I'm trying to test, and the modules that that function relies on.
TEST SCRIPT
const Item = require('../db/item');
const chai = require('chai');
const chaiAsPromised = require('chai-as-promised');

// set up the middleware
chai.use(chaiAsPromised);
var should = require('chai').should();

describe('Item.getByUser', function() {
  context('With valid id', function() {
    const item_id = 1;
    const expectedResult = "Canoe";
    it('should return items', function() {
      return Item
        .getByUser(item_id)
        .then(items => {
          items[0].name.should.equal(expectedResult);
        });
    });
  });
});
SNIPPET FROM THE ITEM.GETBYUSER FUNCTION:
const knex = require('./connection');

module.exports = {
  getByUser: function(id) {
    return knex('item').where('user_id', id);
  },
SNIPPET FROM THE CONNECTION MODULE:
require('dotenv-safe').config();
const environment = process.env.NODE_ENV || 'development';
const config = require('../knexfile')[environment];
module.exports = require('knex')(config);
SNIPPET FROM THE KNEXFILE MODULE:
module.exports = {
  development: {
    client: 'pg',
    connection: process.env.DATABASE_URL
  },
  production: {
    client: 'pg',
    connection: process.env.DATABASE_URL
  }
};
The error message I get for the above test is:
1) Item.getByUser
With valid id
should return items:
select * from "item" where "user_id" = $1 - relation "item" does not exist
error: relation "item" does not exist
at Connection.parseE (node_modules\pg\lib\connection.js:567:11)
at Connection.parseMessage (node_modules\pg\lib\connection.js:391:17)
at Socket.<anonymous> (node_modules\pg\lib\connection.js:129:22)
at addChunk (_stream_readable.js:284:12)
at readableAddChunk (_stream_readable.js:265:11)
at Socket.Readable.push (_stream_readable.js:220:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
Okay, it's looking like this was in fact related to the environment variable for the database in some way I can't quite figure out. Although I had the database connection set to 'postgres://localhost/mydatabase', the database actually being used when I tested the app live, including migrating and seeding through knex commands, was 'postgres://localhost/username', a database with the same name as the owner of 'mydatabase'. But I think the Mocha tests were trying to connect to mydatabase, which was still empty at that point, since the migrate and seed affected the database 'username'.
So I think this can be closed. I'll try to replicate the problem where I was connected to the wrong database; I'm not sure how that could have happened, as I never intentionally set the connection, or any environment variable, to 'postgres://localhost/username'.
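One way to make the test runs less dependent on whatever NODE_ENV and DATABASE_URL happen to be set is to give the knexfile an explicit test environment and point Mocha at it. This is just a sketch, assuming a separate, migrated and seeded test database exists; TEST_DATABASE_URL is an assumed variable name.
// knexfile.js - hedged sketch: add a 'test' environment alongside development/production
module.exports = {
  development: {
    client: 'pg',
    connection: process.env.DATABASE_URL
  },
  test: {
    client: 'pg',
    connection: process.env.TEST_DATABASE_URL // assumed to point at the test database
  },
  production: {
    client: 'pg',
    connection: process.env.DATABASE_URL
  }
};
Running the tests with NODE_ENV=test then makes the connection module (which reads process.env.NODE_ENV) pick up that block.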
I'm part of a team that is developing an application that uses the Fiware GEs as part of the Smart-AgriFood accelerator.
We are using the Orion Context Broker for gathering the data provided by the sensor network, and we intend to use the PEP Proxy to authenticate the sensor nodes for access to the Orion instance. We have tried the following PEP Proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We have only had success with the second (fi-ware-pep-proxy) implementation of the proxy. With fiware-orion-pep we haven't been able to connect to the global Keystone instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IdM) instance for authentication account.lab or cloud.lab, and which ports and addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask:
This payload works with a curl command against cloud.lab.fiware.org:4730/v2.0/tokens:
{
  "auth": {
    "passwordCredentials": {
      "username": "<my_user>",
      "password": "<my_password>"
    }
  }
}
This one doesn't work with a curl command against account.lab.fi-ware.org:5000/v3/auth/tokens:
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "domain": {
            "name": "<my_domain>"
          },
          "name": "<my_user>",
          "password": "<my_password>"
        }
      }
    }
  }
}
3) Which implementation should I be using to authenticate the devices and other calls to the Orion instance?
Here is the configuration that I used:
fiware-orion-pep
config.authentication = {
  checkHeaders: true,
  module: 'keystone',
  user: '<my_user>',
  password: '<my_password>',
  domainName: '<my_domain>',
  retries: 3,
  cacheTTLs: {
    users: 1000,
    projectIds: 1000,
    roles: 60
  },
  options: {
    protocol: 'http',
    host: 'account.lab.fiware.org',
    port: 5000,
    path: '/v3/role_assignments',
    authPath: '/v3/auth/tokens'
  }
};
fi-ware-pep-proxy (this one works); I have set the listening port to 1026 in the source code:
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)
There are currently some differences in how both PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfill the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, and so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction also meant that the functionality occasionally differs slightly from the one in the global instance.
That being said, the concrete answers:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
Please, if you try to proceed with telefonicaid/fiware-orion-pep, also take note that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy (see the sketch after this list);
- the current stable release (0.5.0) is about to change to the next version (probably today), so some of the problems may be solved by the update.
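A minimal sketch of that flag in fiware-orion-pep's config.js, using the same field names that appear elsewhere in this thread:
// Hedged sketch: disable the multitenancy header check when pointing at the global instance
config.authentication = {
  checkHeaders: false, // the global instance does not support fiware-service / fiware-servicepath
  module: 'keystone',
  // ...user, password, domainName and options as in the rest of your configuration
};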
Hope this clarifies some of your doubts.
[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0) from sources and from the RPM package created following the tutorial available on GitHub. When creating the RPM package, it is created with the name pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
  original: {
    host: 'localhost',
    port: 10026
  },
  proxy: {
    port: 1026,
    adminPort: 11211
  }
};

config.authentication = {
  checkHeaders: false,
  module: 'keystone',
  user: '<##################>',
  password: '<###################>',
  domainName: 'admin_domain',
  retries: 3,
  cacheTTLs: {
    users: 1000,
    projectIds: 1000,
    roles: 60
  },
  options: {
    protocol: 'http',
    host: 'cloud.lab.fiware.org',
    port: 4730,
    path: '/v3/role_assignments',
    authPath: '/v3/auth/tokens'
  }
};

config.ssl = {
  active: false,
  keyFile: '',
  certFile: ''
};

config.logLevel = 'DEBUG'; // List of component
config.middlewares = {
  require: 'lib/plugins/orionPlugin',
  functions: [
    'extractCBAction'
  ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate.
Log file at /var/log/pepProxy/pepProxy:
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
result from the client console
{
"message": "There was a connection error while authenticating to Keystone: 500",
"name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?
I am having problems using, in one config file, a config variable set in another config file. E.g.:
// file - config/local.js
module.exports = {
  mongo_db: {
    username: 'TheUsername',
    password: 'ThePassword',
    database: 'TheDatabase'
  }
};
// file - config/connections.js
module.exports.connections = {
  mongo_db: {
    adapter: 'sails-mongo',
    host: 'localhost',
    port: 27017,
    user: sails.config.mongo_db.username,
    password: sails.config.mongo_db.password,
    database: sails.config.mongo_db.database
  },
};
When I 'sails lift', I get the following error:
user: sails.config.mongo_db.username,
^
ReferenceError: sails is not defined
I can access the config variables in other places; e.g., this works:
// file - config/bootstrap.js
module.exports.bootstrap = function(cb) {
  console.log('Dumping config: ', sails.config);
  cb();
};
This dumps all the config settings to the console - I can even see the config settings for mongo_db in there!
I'm so confused.
You can't access sails inside of config files, since Sails config is still being loaded when those files are processed! In bootstrap.js, you can access the config inside the bootstrap function, since that function gets called after Sails is loaded, but not above the function.
In any case, config/local.js gets merged on top of all the other config files, so you can get what you want this way:
// file - config/local.js
module.exports = {
  connections: {
    mongo_db: {
      username: 'TheUsername',
      password: 'ThePassword',
      database: 'TheDatabase'
    }
  }
};

// file - config/connections.js
module.exports.connections = {
  mongo_db: {
    adapter: 'sails-mongo',
    host: 'localhost',
    port: 27017
  },
};
If you really need to access one config file from another you can always use require, but it's not recommended. Since Sails merges config files together based on several factors (including the current environment), it's possible you'd be reading some invalid options. Best to do things the intended way: use config/env/* files for environment-specific settings (e.g. config/env/production.js), config/local.js for settings specific to a single system (like your computer) and the rest of the files for shared settings.
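For completeness, a sketch of the environment-specific variant mentioned above, assuming production needs a different Mongo host; the host value is a placeholder.
// file - config/env/production.js (merged in only when the app is lifted in production)
module.exports = {
  connections: {
    mongo_db: {
      host: 'your-production-mongo-host', // placeholder
      port: 27017
    }
  }
};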