Create a channel and write the channel's logs into another file - symfony-2.3

I created a channel via service container tags in services.yml:
parameters:
    restApiClass: "Telnet\ApiBundle\Services\RestApi"

services:
    restApi:
        class: "%restApiClass%"
        arguments: ["@logger"]
        tags:
            - { name: monolog.logger, channel: rest_api }
then configured Monolog's handlers to write this channel to a different file:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: ["!rest_api"]
        rest_api:
            type: stream
            path: "%kernel.logs_dir%/api.%kernel.environment%.error.log"
            level: error
            channels: [rest_api]
        firephp:
            type: firephp
            level: info
        chromephp:
            type: chromephp
            level: info
But I got an error:
InvalidArgumentException: Monolog configuration error: The logging
channel "rest_api" assigned to the "rest_api" handler does not exist.
What do I need to do to make it work the way I want?
BTW, I'm using Symfony 2.3 (LTS).

Hmm, silly me. I created the bundle by hand but forgot to load it in AppKernel. Sorry =)

Related

Why does CloudFormation give an "Invalid method response 200" error when manual deployment works? (AWS API Gateway WebSocket)

I am getting this error when I deploy a simple websocket mock route.
Execution failed due to configuration error: Output mapping refers to an invalid method response: 200
First of all, I'm a little confused about what "method response" means, since in the WebSocket API the terminology is Route Response and Integration Response. I'm guessing it refers to the Route Response.
The resources I have are:
- WebSocket API
- Stage
- Deployment
- $connect route
- $connect integration with mock (default maps to {"statusCode": 200})
- $connect integration response (just passes the integration through)
- $connect route response
The funny part is: to fix this, all I have to do is go to the console and click "Deploy API". I don't have to change any configuration. But that is not a good solution for me, as I want to run this in a CI/CD pipeline.
I'm guessing the problem is with the Route Response, as that is not configurable from the console. So something must be going on behind the scenes during console deployment which I am missing during CloudFormation deployment. Any ideas how to solve this?
Here's my CloudFormation template:
Resources:
  testWsApiBackendWsApi40DF2EE8:
    Type: AWS::ApiGatewayV2::Api
    Properties:
      Name: testWsApi
      ProtocolType: WEBSOCKET
      RouteSelectionExpression: $request.body.action
  testWsApiApiDeployment423ACBB9:
    Type: AWS::ApiGatewayV2::Deployment
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
    DependsOn:
      - MockWithAuthAwsStackwsMockRoute04DB7577
  testWsApiApiStageF40CAAE0:
    Type: AWS::ApiGatewayV2::Stage
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      StageName: production
      DeploymentId:
        Fn::GetAtt:
          - testWsApiApiDeployment423ACBB9
          - DeploymentId
  MockWithAuthAwsStackwsMockRoute04DB7577:
    Type: AWS::ApiGatewayV2::Route
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      RouteKey: $connect
      Target:
        Fn::Join:
          - ""
          - - integrations/
            - Ref: MockWithAuthAwsStackwsMockIntegration36E7A460
  MockWithAuthAwsStackwsMockIntegration36E7A460:
    Type: AWS::ApiGatewayV2::Integration
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      IntegrationType: MOCK
      PassthroughBehavior: WHEN_NO_TEMPLATES
      RequestTemplates:
        $default: '{"statusCode":200}'
      TemplateSelectionExpression: \$default
  MockWithAuthAwsStackwsMockRouteResponseAEE0B8ED:
    Type: AWS::ApiGatewayV2::RouteResponse
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      RouteId:
        Ref: MockWithAuthAwsStackwsMockRoute04DB7577
      RouteResponseKey: $default
  MockWithAuthAwsStackwsMockIntegrationResponse85928773:
    Type: AWS::ApiGatewayV2::IntegrationResponse
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      IntegrationId:
        Ref: MockWithAuthAwsStackwsMockIntegration36E7A460
      IntegrationResponseKey: $default
      TemplateSelectionExpression: \$default
P.S. I am actually using AWS CDK; the above template is the result of cdk synth. Let me know if you want to see the CDK code.
The reason manual deployment via the console works, while CloudFormation occasionally errors, is the order in which these resources are created. The console follows this order:
1. Routes, integrations, and responses are created
2. They are associated with a stage
3. They are deployed to the specified stage
When using CloudFormation, the order of resource creation gets mixed up, so the resources are not wired up properly. You have already made the deployment depend on the route being created first; you also need to make sure the stage is created before the deployment. For this you could add an explicit DependsOn attribute, or an implicit reference to the stage within the deployment, perhaps in the StageName property as !Ref StageResource.
Or you could save yourself the trouble and just add autoDeploy: true to your stage, which takes care of the linking and ordering on its own; a minimal sketch of that follows.
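For illustration, a rough sketch of the autoDeploy variant applied to the stage above (AutoDeploy is the CloudFormation property behind CDK's autoDeploy flag; with it enabled, API Gateway redeploys on changes, so the explicit Deployment resource and its DependsOn can usually be dropped):

  testWsApiApiStageF40CAAE0:
    Type: AWS::ApiGatewayV2::Stage
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      StageName: production
      AutoDeploy: true # API Gateway deploys route/integration changes to this stage automatically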

Serverless version > 2.35: error in variable replacement in the CloudFormation template

How are you?
I'm facing a very odd error after upgrading my Serverless version from 2.35 to any newer version.
Using exactly the same .yml that deploys fine in 2.35, newer versions throw the following error:
ProviderARNs need to be valid Cognito Userpools:
Serverless Error ----------------------------------------
An error occurred: ApiGatewayCognitoAuthorizer - ProviderARNs need to be valid Cognito Userpools. Invalid ARNs-
arn:aws:cognito-idp:${file(./src/config/dev.json):REGION}:${file(./src/config/dev.json):AWS_ACCOUNT}:userpool/${file(./src/config/dev.json):COGNITO_POOL_ID} (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; Request ID: e8403d66-ec5c-4ead-9528-308baed7640f; Proxy: null).
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 12.22.1
Framework Version: 2.50.0 (local)
Plugin Version: 5.4.4
SDK Version: 4.3.0
Components Version: 3.17.0
Digging deeper, the problem is that the CloudFormation template generated during deployment cannot resolve variables that 2.35 and earlier versions resolved properly (especially those that depend on the config file). For example, in my code:
ApiGatewayCognitoAuthorizer:
  DependsOn:
    - ApiGatewayRestApi
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: cognito-authorizer
    IdentitySource: method.request.header.Authorization
    ProviderARNs:
      - "arn:aws:cognito-idp:${${self:custom.config}:REGION}:${${self:custom.config}:AWS_ACCOUNT}:userpool/${${self:custom.config}:COGNITO_POOL_ID}"
    RestApiId:
      Ref: ApiGatewayRestApi
    Type: COGNITO_USER_POOLS
The same variable replacements are used in other resources, lambdas, etc., but the error is only thrown for the Cognito authorizer. I don't understand...
Thank you all for your attention and help :)
Solved using https://www.serverless.com/framework/docs/environment-variables/, i.e., loading environment variables from .env files instead of custom .json files; a sketch of the approach is below.
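For illustration, a minimal sketch of that approach, assuming the REGION, AWS_ACCOUNT, and COGNITO_POOL_ID values are moved from the .json config into a .env file (useDotenv is the Serverless v2 flag that enables .env loading; the resource layout is carried over from the question):

# serverless.yml
useDotenv: true # load variables from .env / .env.{stage} files

resources:
  Resources:
    ApiGatewayCognitoAuthorizer:
      Type: AWS::ApiGateway::Authorizer
      Properties:
        Name: cognito-authorizer
        Type: COGNITO_USER_POOLS
        IdentitySource: method.request.header.Authorization
        RestApiId:
          Ref: ApiGatewayRestApi
        ProviderARNs:
          # resolved at package time from the .env file instead of dev.json
          - "arn:aws:cognito-idp:${env:REGION}:${env:AWS_ACCOUNT}:userpool/${env:COGNITO_POOL_ID}"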
Thanks anyway to everybody.

How do I modify existing jobs to switch owner?

I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but they were; removing the notification settings stopped the error from being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  notification:
    onstart:
      email:
        attachType: file
        recipients: ops@company.com
        subject: Actual work being started
  notifyAvgDurationThreshold: null
  options:
  - enforced: true
    name: do_the_job
    required: true
    values:
    - 'yes'
    - 'no'
    valuesListDelimiter: ','
  - enforced: true
    name: fail_a_lot
    required: true
    values:
    - 'yes'
    - 'no'
    valuesListDelimiter: ','
  scheduleEnabled: false
  sequence:
    commands:
    - description: The actual work
      script: |-
        #!/bin/bash
        echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
    keepgoing: false
    strategy: node-first
  timeout: '60'
  uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
- description: Do the job
  group: jobs/individual
  name: do_the_job
  ...
  notification:
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  notifyAvgDurationThreshold: null
  ...
  sequence:
    commands:
    - description: Call the job that does the work
      jobref:
        args: -do_the_job yes -fail_a_lot no
        group: jobs/common
        name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away (a sketch of the fixed common job follows). I'm not sure whether sending notifications from an included job is simply unsupported. It would be useful to me if it were, so I could keep notification settings in a single place. However, I can understand why it presents a problem for the scheduler/executor.
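For completeness, a minimal sketch of the fixed common job: identical to the definition above except that the notification block is gone (options omitted here for brevity; the calling job's onfailure email is left untouched):

- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  # notification: block deleted; this is what stops the g:render/no-Session error
  scheduleEnabled: false
  sequence:
    commands:
    - description: The actual work
      script: |-
        #!/bin/bash
        echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
    keepgoing: false
    strategy: node-first
  timeout: '60'
  uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8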

Serverless CloudFormation template error instance of Fn::GetAtt references undefined resource

I'm trying to set up a new repo and I keep getting the error
The CloudFormation template is invalid: Template error: instance of Fn::GetAtt
references undefined resource uatLambdaRole
in my uat stage; however, the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in AWS
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration, but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in AWS
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration, but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
  stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one, with only the stage name changed.
Here is a screenshot of the directory the resource files are in
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all three places. The three places where the SQS name must match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn

resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue turned out to be that I had the "resources" and "Resources" keys in serverless.yml backwards.
Correct:
resources: # <-- lowercase "r" first
  Resources: # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here: the actual reference to my resources file:
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it deployed; the corrected reference is below.
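The corrected reference, with the custom stage variable swapped in:

resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}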
Indentation Issue
In general, when YAML isn't working I start by checking the indentation.
I hit this issue when one of my resources was indented too much, which put it inside the wrong node/object. Resource definitions should be two indents in, since they belong under the Resources sub-node of the resources node; a sketch of the pitfall is below.
For more info on this, see the YAML docs.
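A minimal sketch of that pitfall (resource names are illustrative, not from this thread):

# Wrong: the role is indented one level too far, nesting it inside MyQueue,
# so Fn::GetAtt on MyLambdaRole reports an undefined resource
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      MyLambdaRole:
        Type: AWS::IAM::Role

# Right: the role sits two indents in, directly under Resources
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
    MyLambdaRole:
      Type: AWS::IAM::Role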

Set up monolog to use rollbar with symfony3?

config.yml
handlers:
    rollbar:
        type: stream
        token: '%rollbar.token%'
        level: warning
        bubble: true
        config:
            environment: '%rollbar.environment%'
    main:
        type: stream
        path: "%kernel.logs_dir%/%kernel.environment%.log"
        level: debug
There is no activity on Rollbar when I log:
$this->get('logger')->warning('testing rollbar');
I have tested their own code (using rollbar/rollbar), which works OK, following the instructions at:
https://rollbar.com/docs/notifier/rollbar-php/#installation
I've got Rollbar on my own Symfony app, but it looks like you have the type set to stream, which just sends everything to the standard log files.
rollbar:
    token: "%rollbar_serverside_token%"
    type: "rollbar" ## This enables the Rollbar handler
    config:
        environment: "%kernel.environment%"
        handler: blocking
    level: debug
Since I've grouped the Rollbar handler under a fingers_crossed main/default handler, it only sends items to the outside service if an exception occurs; a sketch of that grouping is below.
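For illustration, a minimal sketch of that grouping (the action_level choice and the nested flag are assumptions, not the exact config from my app):

monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error # buffer all records; flush to Rollbar only once an error occurs
            handler: rollbar
        rollbar:
            type: rollbar
            token: "%rollbar_serverside_token%"
            config:
                environment: "%kernel.environment%"
                handler: blocking
            level: debug
            nested: true # referenced by fingers_crossed, so it isn't registered on the stack twice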