Jobs in queue are disappearing - Bull queue

I have a NestJS server that uses the Bull library.
The problem I'm struggling with is jobs disappearing.
To investigate, I added logging to all of the queue event listeners, like so:
import {
  OnQueueActive, OnQueueCompleted, OnQueueDrained, OnQueueError, OnQueueFailed,
  OnQueuePaused, OnQueueProgress, OnQueueRemoved, OnQueueStalled, OnQueueWaiting,
} from '@nestjs/bull';
import { Job } from 'bull';

@OnQueueCompleted()
async onCompleted(job: Job) {
  console.log(`Job completed: ${job.id}`);
}

@OnQueueFailed()
async onFailed(job: Job, error: Error) {
  console.log(`Job failed: ${job.id} with error: ${error}`);
}

@OnQueueActive()
async onActive(job: Job) {
  console.log(`Job active: ${job.id}`);
}

@OnQueueError()
async onError(error: Error) {
  console.log(`Job onError: ${error}`);
}

@OnQueueDrained()
async onDrained() {
  console.log(`Job onDrained`);
}

@OnQueuePaused()
async onPaused() {
  console.log(`Job onPaused`);
}

@OnQueueProgress()
async onProgress(job: Job, progress: number) {
  console.log(`Job onProgress: ${job.id} (${progress})`);
}

@OnQueueStalled()
async onStalled(job: Job) {
  console.log(`Job onStalled: ${job.id}`);
}

@OnQueueWaiting()
async onWaiting(jobId: number | string) {
  console.log(`Job onWaiting: ${jobId}`);
}

@OnQueueRemoved()
async onRemoved(job: Job) {
  console.log(`Job onRemoved: ${job.id}`);
}
After registering tasks in the queue, I got logs like the following:
8|dev | Job onWaiting: 1340
8|dev | Job onWaiting: 1341
8|dev | Job onWaiting: 1342
8|dev | Job onWaiting: 1343
8|dev | Job onWaiting: 1344
8|dev | Job onWaiting: 1345
8|dev | Job active: 1340
8|dev | Job active: 1342
8|dev | Job completed: 1342
8|dev | Job completed: 1340
8|dev | Job onDrained
8|dev | Job onDrained
I have no idea what is happening.
How can I find the cause of this problem?
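One way to narrow it down is to dump the queue's per-state counters and look up one of the "missing" job ids directly. A rough diagnostic sketch (the 'tasks' queue name and the service are placeholders; getJobCounts/getJob/getState are Bull queue methods):

import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Injectable()
export class QueueInspector {
  constructor(@InjectQueue('tasks') private readonly queue: Queue) {}

  async inspect(jobId: string) {
    // how many jobs are currently waiting / active / completed / failed / delayed
    console.log(await this.queue.getJobCounts());
    // fetch one of the ids that never logged "completed"; null means it is gone from Redis
    const job = await this.queue.getJob(jobId);
    console.log(job ? await job.getState() : 'job not found');
  }
}

If another process is attached to the same queue, it is also worth grepping its logs for the same job ids.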
Additional information
I am running two servers on different ports (3001, 3002) on one EC2 instance. The two servers share a single Redis server at localhost:6379. When I send a request to add a job to the queue on port 3001, the job is added by the 3001 process, but the task is processed by the 3002 process.
What am I doing wrong?
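For what it's worth, two processes that register a processor for the same queue name against the same Redis act as competing consumers: a job added through the 3001 server can legitimately be picked up and completed by the 3002 server, and the local active/completed listeners then only fire in that other process (which would explain the missing log lines above). If the two servers are not supposed to share work, here is a rough sketch of keeping them apart with a per-instance prefix (the 'tasks' name, the PORT variable and the prefix scheme are assumptions; name/redis/prefix are @nestjs/bull registerQueue options):

import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    BullModule.registerQueue({
      name: 'tasks',
      redis: { host: 'localhost', port: 6379 },
      // hypothetical per-instance prefix so the 3001 and 3002 processes
      // no longer pull jobs from each other's queue
      prefix: `app-${process.env.PORT}`,
    }),
  ],
})
export class TasksModule {}

Alternatively, keep one shared queue and register the @Process() handler in only one of the two servers, letting the other one only enqueue.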

Running "yarn hardhat deploy --tags mocks' to but its running other files. Do i need any specific dependencies? If so how can i check?

This is the file I am trying to deploy for testing, called "00-deploy-mocks":
const { network } = require("hardhat")
const {
  developmentChains,
  DECIMALS,
  initialAnswer
} = require("../helper-hardhat-config")

module.exports = async ({ getNamedAccounts, deployments }) => {
  const { deploy, log } = deployments
  const { deployer } = await getNamedAccounts()
  const chainId = network.config.chainId

  // If we are on a local development network, we need to deploy mocks!
  if (developmentChains.includes(network.name)) {
    log("Local network detected! Deploying mocks...")
    await deploy("MockV3Aggregator", {
      contract: "MockV3Aggregator",
      from: deployer,
      log: true,
      args: [DECIMALS, initialAnswer]
    })
    log("Mocks Deployed!")
    log("------------------------------------------------")
  }
}

module.exports.tags = ["all", "mocks"]
This is the error I'm getting:
yarn run v1.22.15
warning package.json: No license field
$ /home/fbaqueriza/hh-fcc/hardhat-fund-me-fcc/node_modules/.bin/hardhat deploy --tags mocks
Nothing to compile
An unexpected error occurred:
Error: ERROR processing skip func of /home/fbaqueriza/hh-fcc/hardhat-fund-me-fcc/deploy/01-deploy-fund-me.js:
/home/fbaqueriza/hh-fcc/hardhat-fund-me-fcc/deploy/01-deploy-fund-me.js:29
const fundMe = await deploy("FundMe", {
^^^^^
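Note that hardhat-deploy still loads every script under deploy/ and evaluates its optional skip function even when you run with --tags mocks, which is why an error coming from 01-deploy-fund-me.js can surface here at all. For orientation, a purely illustrative sketch of such a skip hook (not your actual file):

// e.g. at the bottom of a deploy script such as 01-deploy-fund-me.js
const { developmentChains } = require("../helper-hardhat-config")

// hardhat-deploy calls this for every deploy script, even when filtering by tags;
// returning true skips the script (here: skip it on local development networks)
module.exports.skip = async ({ network }) => developmentChains.includes(network.name)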

AWS CDK Pipeline Error - No stack found matching "xxxxx"

I am having a hard time with the last CDK Pipeline I have deployed. I have followed the steps here: https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html and the overall experience has been quite painful.
First of all, I had to manually update the S3 bucket policy to let the pipeline read/write from the artifact bucket, as I was otherwise getting 403 Access Denied errors.
That part seems resolved, but now, in the "UpdatePipeline" stage, I am getting failures with this error message: Error: No stack found matching 'PTPipelineStack'. Use "list" to print manifest, when the stack clearly exists in CloudFormation, and if I run the cdk list command from the CLI I do see the PTPipelineStack. I have destroyed the pipeline and redeployed it a few times "just in case", but that didn't really help.
Any suggestion as to what can be done to fix this?
bin/file.ts:
#!/usr/bin/env node
import * as cdk from '@aws-cdk/core'
import 'source-map-support/register'
import { MyPipelineStack } from '../lib/build-pipeline'

const app = new cdk.App()

const pipelineStack = new MyPipelineStack(app, 'PTPipelineStack', {
  env: {
    account: 'xxxxxxxxxxxx',
    region: 'eu-west-1',
  },
})

app.synth()
lib/build-pipeline.ts:
import * as codepipeline from '@aws-cdk/aws-codepipeline'
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions'
import { Construct, Stack, StackProps, Stage, StageProps } from '@aws-cdk/core'
import { CdkPipeline, SimpleSynthAction } from '@aws-cdk/pipelines'
import { PasstimeStack } from './passtime-stack'

export class MyApplication extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props)
    new PasstimeStack(this, 'Cognito')
  }
}

export class MyPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props)

    const sourceArtifact = new codepipeline.Artifact()
    const cloudAssemblyArtifact = new codepipeline.Artifact()

    const pipeline = new CdkPipeline(this, 'Pipeline', {
      pipelineName: 'PassTimeAppPipeline',
      cloudAssemblyArtifact,
      sourceAction: new codepipeline_actions.BitBucketSourceAction({
        actionName: 'Github',
        connectionArn:
          'arn:aws:codestar-connections:eu-west-1:xxxxxxxxxxxxxxx',
        owner: 'owner',
        repo: 'repo',
        branch: 'dev',
        output: sourceArtifact,
      }),
      synthAction: SimpleSynthAction.standardNpmSynth({
        sourceArtifact,
        cloudAssemblyArtifact,
        installCommand: 'npm ci',
        environment: {
          privileged: true,
        },
      }),
    })

    pipeline.addApplicationStage(
      new MyApplication(this, 'Dev', {
        env: {
          account: 'xxxxxxxx',
          region: 'eu-west-1',
        },
      })
    )
  }
}
deps on my package.json:
"devDependencies": {
"#aws-cdk/assert": "^1.94.1",
"#types/jest": "^26.0.21",
"#types/node": "14.14.35",
"aws-cdk": "^1.94.1",
"jest": "^26.4.2",
"ts-jest": "^26.5.4",
"ts-node": "^9.0.0",
"typescript": "4.2.3"
},
"dependencies": {
"#aws-cdk/aws-appsync": "^1.94.1",
"#aws-cdk/aws-cloudfront": "^1.94.1",
"#aws-cdk/aws-cloudfront-origins": "^1.94.1",
"#aws-cdk/aws-codebuild": "^1.94.1",
"#aws-cdk/aws-codepipeline": "^1.94.1",
"#aws-cdk/aws-codepipeline-actions": "^1.94.1",
"#aws-cdk/aws-cognito": "^1.94.1",
"#aws-cdk/aws-dynamodb": "^1.94.1",
"#aws-cdk/aws-iam": "^1.94.1",
"#aws-cdk/aws-kms": "^1.94.1",
"#aws-cdk/aws-lambda": "^1.94.1",
"#aws-cdk/aws-lambda-nodejs": "^1.94.1",
"#aws-cdk/aws-pinpoint": "^1.94.1",
"#aws-cdk/aws-s3": "^1.94.1",
"#aws-cdk/aws-s3-deployment": "^1.94.1",
"#aws-cdk/core": "^1.94.1",
"#aws-cdk/custom-resources": "^1.94.1",
"#aws-cdk/pipelines": "^1.94.1",
"#aws-sdk/s3-request-presigner": "^3.9.0",
"source-map-support": "^0.5.16"
}
CodeBuild logs:
[Container] 2021/03/19 17:43:59 Entering phase INSTALL
--
16 | [Container] 2021/03/19 17:43:59 Running command npm install -g aws-cdk
17 | /usr/local/bin/cdk -> /usr/local/lib/node_modules/aws-cdk/bin/cdk
18 | + aws-cdk@1.94.1
19 | added 193 packages from 186 contributors in 6.404s
20 |
21 | [Container] 2021/03/19 17:44:09 Phase complete: INSTALL State: SUCCEEDED
22 | [Container] 2021/03/19 17:44:09 Phase context status code: Message:
23 | [Container] 2021/03/19 17:44:09 Entering phase PRE_BUILD
24 | [Container] 2021/03/19 17:44:10 Phase complete: PRE_BUILD State: SUCCEEDED
25 | [Container] 2021/03/19 17:44:10 Phase context status code: Message:
26 | [Container] 2021/03/19 17:44:10 Entering phase BUILD
27 | [Container] 2021/03/19 17:44:10 Running command cdk -a . deploy PTPipelineStack --require-approval=never --verbose
28 | CDK toolkit version: 1.94.1 (build 60d8f91)
29 | Command line arguments: {
30 | _: [ 'deploy' ],
31 | a: '.',
32 | app: '.',
33 | 'require-approval': 'never',
34 | requireApproval: 'never',
35 | verbose: 1,
36 | v: 1,
37 | lookups: true,
38 | 'ignore-errors': false,
39 | ignoreErrors: false,
40 | json: false,
41 | j: false,
42 | debug: false,
43 | ec2creds: undefined,
44 | i: undefined,
45 | 'version-reporting': undefined,
46 | versionReporting: undefined,
47 | 'path-metadata': true,
48 | pathMetadata: true,
49 | 'asset-metadata': true,
50 | assetMetadata: true,
51 | 'role-arn': undefined,
52 | r: undefined,
53 | roleArn: undefined,
54 | staging: true,
55 | 'no-color': false,
56 | noColor: false,
57 | fail: false,
58 | all: false,
59 | 'build-exclude': [],
60 | E: [],
61 | buildExclude: [],
62 | ci: false,
63 | execute: true,
64 | force: false,
65 | f: false,
66 | parameters: [ {} ],
67 | 'previous-parameters': true,
68 | previousParameters: true,
69 | '$0': '/usr/local/bin/cdk',
70 | STACKS: [ 'PTPipelineStack' ],
71 | 'S-t-a-c-k-s': [ 'PTPipelineStack' ]
72 | }
73 | merged settings: {
74 | versionReporting: true,
75 | pathMetadata: true,
76 | output: 'cdk.out',
77 | app: '.',
78 | context: {},
79 | debug: false,
80 | assetMetadata: true,
81 | requireApproval: 'never',
82 | toolkitBucket: {},
83 | staging: true,
84 | bundlingStacks: [ '*' ],
85 | lookups: true
86 | }
87 | Toolkit stack: CDKToolkit
88 | Setting "CDK_DEFAULT_REGION" environment variable to eu-west-1
89 | Resolving default credentials
90 | Looking up default account ID from STS
91 | Default account ID: xxxxxx
92 | Setting "CDK_DEFAULT_ACCOUNT" environment variable to xxxxxxxxx
93 | context: {
94 | 'aws:cdk:enable-path-metadata': true,
95 | 'aws:cdk:enable-asset-metadata': true,
96 | 'aws:cdk:version-reporting': true,
97 | 'aws:cdk:bundling-stacks': [ '*' ]
98 | }
99 | --app points to a cloud assembly, so we bypass synth
100 | No stack found matching 'PTPipelineStack'. Use "list" to print manifest
101 | Error: No stack found matching 'PTPipelineStack'. Use "list" to print manifest
102 | at CloudAssembly.selectStacks (/usr/local/lib/node_modules/aws-cdk/lib/api/cxapp/cloud-assembly.ts:115:15)
103 | at CdkToolkit.selectStacksForDeploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:385:35)
104 | at CdkToolkit.deploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:111:20)
105 | at initCommandLine (/usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:208:9)
106 |
107 | [Container] 2021/03/19 17:44:10 Command did not exit successfully cdk -a . deploy PTPipelineStack --require-approval=never --verbose exit status 1
108 | [Container] 2021/03/19 17:44:10 Phase complete: BUILD State: FAILED
109 | [Container] 2021/03/19 17:44:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: cdk -a . deploy PTPipelineStack --require-approval=never --verbose. Reason: exit status 1
110 | [Container] 2021/03/19 17:44:10 Entering phase POST_BUILD
111 | [Container] 2021/03/19 17:44:10 Phase complete: POST_BUILD State: SUCCEEDED
112 | [Container] 2021/03/19 17:44:10 Phase context status code: Message:
I ran into the same issue and I'm not sure exactly how I fixed it, but here are some things to try:
Make sure your dev branch is pushed to GitHub and not just committed locally, because that is what your pipeline points to (this was my problem).
I was using 1.94.1 but swapped to 1.94.0 - not sure if this helped.
I pin all my CDK versions (removing the ^) so they don't end up conflicting with each other at some point.
I finally had a breakthrough yesterday. The issue I outlined above was a consequence of an issue that started earlier in the pipeline, which was in fact a lack of permissions on the artifacts S3 bucket. This is the original error message, which appeared at the Source stage:
Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxx; S3 Extended Request ID: xxxx; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null)
I had unblocked the pipeline by creating a bucket policy on the artifact bucket, but as stated previously that only pushed the issue further down the line. Focusing on the original issue, I realised that the CDK was not granting sufficient permissions to one of the roles it created.
As of today, in order to use a GitHub repo that belongs to an organisation, one needs to use the "GitHub v2" integration, which relies on CodeStar (v1 = access tokens = private repos).
Currently the only way to set this up with the CDK is to use the BitBucketSourceAction, as seen in my code above.
Interestingly, when deploying a new pipeline stack, the CDK creates a dedicated IAM role and grants it the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "codestar-connections:UseConnection",
      "Resource": "arn:aws:codestar-connections:eu-west-1:xxxxx:connection/xxxx",
      "Effect": "Allow"
    },
    {
      "Action": [
        "s3:GetObject*",
        "s3:GetBucket*",
        "s3:List*",
        "s3:DeleteObject*",
        "s3:PutObject",
        "s3:Abort*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*"
      ],
      "Resource": "arn:aws:kms:eu-west-1:xxxxxxx:key/xxxxx",
      "Effect": "Allow"
    }
  ]
}
This looks OK at first, but it turns out not to be sufficient for the pipeline to access the bucket and go through the stages. I suspect that it is missing PutBucketPolicy permissions. I have currently fixed it by replacing the specific actions with s3:*, but that should be fine-tuned.
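For illustration only, here is roughly what such a broadened statement could look like if attached from the CDK side; the role ARN and bucket name below are placeholders, and this is a sketch rather than the code the pipeline actually generates:

import * as iam from '@aws-cdk/aws-iam'
import * as s3 from '@aws-cdk/aws-s3'

// inside the pipeline stack constructor; both references are hypothetical imports
const actionRole = iam.Role.fromRoleArn(this, 'SourceActionRole',
  'arn:aws:iam::111111111111:role/source-action-role')
const artifactBucket = s3.Bucket.fromBucketName(this, 'ArtifactBucket',
  'my-pipeline-artifact-bucket')

// temporary blanket S3 access on the artifact bucket, to be narrowed down later
actionRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: ['s3:*'],
  resources: [artifactBucket.bucketArn, artifactBucket.arnForObjects('*')],
}))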
In the end I am using the latest and greatest 1.94.1; it is not a deps issue but a CDK one. I will raise an issue with the aws-cdk gang. 👍

The value supplied for parameter 'instanceProfileName' is not valid

Running cdk deploy I receive the following error message:
CREATE_FAILED | AWS::ImageBuilder::InfrastructureConfiguration | TestInfrastructureConfiguration The value supplied for parameter 'instanceProfileName' is not valid. The provided instance profile does not exist. Please specify a different instance profile and try again. (Service: Imagebuilder, Status Code: 400, Request ID: 41f431d7-8544-48e9-9faf-a870b83b0100, Extended Request ID: null)
The C# code looks like this:
var instanceProfile = new CfnInstanceProfile(this, "TestInstanceProfile", new CfnInstanceProfileProps {
    InstanceProfileName = "test-instance-profile",
    Roles = new string[] { "TestServiceRoleForImageBuilder" }
});

var infrastructureConfiguration = new CfnInfrastructureConfiguration(this, "TestInfrastructureConfiguration", new CfnInfrastructureConfigurationProps {
    Name = "test-infrastructure-configuration",
    InstanceProfileName = instanceProfile.InstanceProfileName,
    InstanceTypes = new string[] { "t2.medium" },
    Logging = new CfnInfrastructureConfiguration.LoggingProperty {
        S3Logs = new CfnInfrastructureConfiguration.S3LogsProperty {
            S3BucketName = "s3-test-assets",
            S3KeyPrefix = "ImageBuilder/Logs"
        }
    },
    SubnetId = "subnet-12f3456f",
    SecurityGroupIds = new string[] { "sg-12b3e4e5b67f8900f" }
});
The TestServiceRoleForImageBuilder exists and was working previously. The same code was running successfully about a month ago. Any suggestions?
If I remove the CfnInfrastructureConfiguration creation part, deployment runs successfully, but it takes at least 2 minutes to complete:
AwsImageBuilderStack: deploying...
AwsImageBuilderStack: creating CloudFormation changeset...
0/3 | 14:24:37 | REVIEW_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:43 | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile Resource creation Initiated
1/3 | 14:24:48 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata) Resource creation Initiated
1/3 | 14:24:48 | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
1/3 Currently in progress: AwsImageBuilderStack, TestInstanceProfile
3/3 | 14:26:48 | CREATE_COMPLETE | AWS::IAM::InstanceProfile | TestInstanceProfile
3/3 | 14:26:49 | CREATE_COMPLETE | AWS::CloudFormation::Stack | AwsImageBuilderStack
Is it perhaps some race condition? Should I use multiple stacks to achieve my goal?
Would it be possible to use a wait condition (AWS::CloudFormation::WaitCondition) to work around the 2 minutes of creation time, in case that delay is intended (AWS::IAM::InstanceProfile resources seem to always take exactly 2 minutes to create)?
Environment
CDK CLI Version: 1.73.0
Node.js Version: 14.13.0
OS: Windows 10
Language (Version): C# (.NET Core 3.1)
Update
Since the cause seems to be AWS-internal, I used a pre-created instance profile as a workaround. The profile can be created either through the IAM Management Console or the CLI. However, it would be nice to have a proper solution.
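For reference, pre-creating the profile from the CLI takes two calls (profile and role names taken from the code above):

aws iam create-instance-profile --instance-profile-name test-instance-profile
aws iam add-role-to-instance-profile --instance-profile-name test-instance-profile --role-name TestServiceRoleForImageBuilder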
You have to create a dependency between the two constructs. CDK does not infer it when you use the optional name parameter, as opposed to the logical ID (which doesn't seem to work in this situation).
infrastructureConfiguration.node.addDependency(instanceProfile)
Here are the relevant docs: https://docs.aws.amazon.com/cdk/api/latest/docs/core-readme.html#construct-dependencies

SailsJS MongoDB timeout

I have this controller code in my sails app:
let userId = req.body.userId;

User.findOne({ id: userId })
  .then((user) => {
    console.log('User found:', user);
    return res.ok('It worked!');
  })
  .catch((err) => {
    sails.log.error('indexes - error', err);
    return res.badRequest(err);
  });
When I start my server it works, but after some time (~5 min) it stops working and I end up with the following error message:
web_1 | Sending 400 ("Bad Request") response:
web_1 | Error (E_UNKNOWN) :: Encountered an unexpected error
web_1 | MongoError: server 13.81.244.244:27017 received an error {"name":"MongoError","message":"read ETIMEDOUT"}
web_1 | at null.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:213:40)
web_1 | at g (events.js:260:16)
web_1 | at emitTwo (events.js:87:13)
web_1 | at emit (events.js:172:7)
web_1 | at null.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:119:12)
web_1 | at g (events.js:260:16)
web_1 | at emitTwo (events.js:87:13)
web_1 | at emit (events.js:172:7)
web_1 | at Socket.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:154:93)
web_1 | at Socket.g (events.js:260:16)
web_1 | at emitOne (events.js:77:13)
web_1 | at Socket.emit (events.js:169:7)
web_1 | at emitErrorNT (net.js:1269:8)
web_1 | at nextTickCallbackWith2Args (node.js:511:9)
web_1 | at process._tickDomainCallback (node.js:466:17)
web_1 |
web_1 | Details: MongoError: server 13.81.244.244:27017 received an error {"name":"MongoError","message":"read ETIMEDOUT"}
The DB is still up at this point, and looking at the logs, everything seems fine on this side:
2017-03-23T16:45:51.664+0000 I NETWORK [thread1] connection accepted from 13.81.243.59:51558 #7811 (39 connections now open)
2017-03-23T16:45:51.664+0000 I NETWORK [conn7811] received client metadata from 13.81.253.59:51558 conn7811: { driver: { name: "nodejs", version: "2.2.25" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-62-generic" }, platform: "Node.js v4.7.3, LE, mongodb-core: 2.1.9" }
2017-03-23T16:45:51.723+0000 I ACCESS [conn7811] Successfully authenticated as principal username on dbname
My connections.js config looks like this:
module.exports.connections = {
  sailsmongo: {
    adapter  : 'sails-mongo',
    host     : process.env.MONGODB_HOST,
    port     : 27017,
    user     : process.env.MONGODB_USERNAME,
    password : process.env.MONGODB_PASSWORD,
    database : process.env.MONGODB_DBNAME
  },
}
and in package.json:
"sails": "~0.12.4",
"sails-mongo": "^0.12.1",
Notes:
Among the unconfirmed possible sources of misbehavior, I see:
the app is dockerized
I have a query that takes quite some time (~1/2 min) and calls several child processes, so I'd suspect some memory leak out there, though this is still unconfirmed!
Any idea on this?
EDIT:
After some digging and looking at the DB logs, I have the impression that Sails/Waterline opens a new connection for each query, while it should connect once and keep that connection alive. This would be the cause of the issue.
Based on this, I decided to try Mongoose instead, and bingo: with Mongoose it works like a charm.
I'm guessing this is a Sails/Waterline bug, though I'm not clear on how to reproduce it reliably.
Anyway, I'm now moving my app from Waterline to Mongoose.
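For completeness, a minimal sketch of the Mongoose side (assuming roughly the same connection settings; the model definition and function name are placeholders). The key difference is that the connection pool is opened once at startup and reused by every query:

const mongoose = require('mongoose');

// connect once at application startup; the driver keeps the pool alive
mongoose.connect(
  `mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}` +
  `@${process.env.MONGODB_HOST}:27017/${process.env.MONGODB_DBNAME}`
);

const User = mongoose.model('User', new mongoose.Schema({ name: String }));

// controller logic equivalent to the Waterline version above
function findUser(req, res) {
  return User.findOne({ _id: req.body.userId })
    .then((user) => {
      console.log('User found:', user);
      return res.ok('It worked!');
    })
    .catch((err) => res.badRequest(err));
}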

Can celery's beat tasks execute at timed intervals?

This is the beat tasks setting:
celery_app.conf.update(
    CELERYBEAT_SCHEDULE = {
        'taskA': {
            'task': 'crawlerapp.tasks.manual_crawler_update',
            'schedule': timedelta(seconds=3600),
        },
        'taskB': {
            'task': 'crawlerapp.tasks.auto_crawler_update_day',
            'schedule': timedelta(seconds=3600),
        },
        'taskC': {
            'task': 'crawlerapp.tasks.auto_crawler_update_hour',
            'schedule': timedelta(seconds=3600),
        },
    }
)
Normally taskA, taskB, and taskC all execute at the same time after I run celery -A myproj beat. But now I want taskA to execute first, taskB to execute some time later, and taskC to execute last, and after 3600 seconds the whole sequence should run again, over and over. Is that possible?
Yeah, it's possible. Create a chain of the three tasks, wrap it in its own task, and use that task for scheduling.
In your tasks.py file:
from celery import chain

@celery_app.task
def chained_task():
    # .si() makes each signature immutable, so no result is passed from one task to the next;
    # the chain runs taskA, then taskB, then taskC, strictly in that order
    chain(taskA.si(), taskB.si(), taskC.si()).apply_async()
Then schedule the chained_task:
celery_app.conf.update(
    CELERYBEAT_SCHEDULE = {
        'chained_task': {
            'task': 'crawlerapp.tasks.chained_task',
            'schedule': timedelta(seconds=3600),
        },
    }
)
With this, the three tasks will execute in order once every 3600 seconds.