So far I'm able to pull the config-repo files from GitLab using a simple username/password on my local system, and it works well. Now I'm moving things to AWS ECS (Fargate).
The native profile works well, but I want to use a git URI, and for that I must provide credentials to connect.
spring:
  profiles: dev
  cloud:
    config:
      server:
        git:
          uri: https://gitlab.com/<group>/<project>.git
          clone-on-start: true
          default-label: gitlabci-test
          searchPaths: '{profile}'
          username: ${gitlab-username}
          password: ${gitlab-password}
How can I configure the config-server to pull credentials from AWS Parameter store or secret-manager? Any help would be appreciated.
Create a new policy named GetParameters and attach it to the current task role:
IAM -> Create policy -> select 'Systems Manager' as the service -> choose the 'GetParameters' action (read access only) -> all resources -> create the policy.
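The policy created in the console above corresponds roughly to the following policy document (a sketch; the `Resource` can be narrowed to your parameter path instead of `*`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "*"
    }
  ]
}
```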
Go to Systems Manager -> Parameter Store and store the sensitive values as SecureString parameters.
Go to Task -> Container Definitions -> Environment Variables and provide the variables below.
The value should be an ARN of the form arn:aws:ssm:<your-aws-account-region>:<aws-account-id>:parameter/<name>:
GITLAB_USERNAME, ValueFrom , arn:aws:ssm:::parameter/dev/my-config-server/GITLAB_USERNAME
GITLAB_PASSWORD, ValueFrom , arn:aws:ssm:::parameter/dev/my-config-server/GITLAB_PASSWORD
By convention, the parameter Name should be of the form /<environment>/<service>/<attribute-name>.
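Following that convention, the two parameters above could be created with the AWS CLI (the values shown are placeholders):

```shell
aws ssm put-parameter \
  --name /dev/my-config-server/GITLAB_USERNAME \
  --type SecureString \
  --value "my-gitlab-user"

aws ssm put-parameter \
  --name /dev/my-config-server/GITLAB_PASSWORD \
  --type SecureString \
  --value "my-gitlab-password"
```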
And that's it. Wait for the task to be provisioned, and the config server will be able to connect to your remote repo.
spring:
  profiles: dev
  cloud:
    config:
      server:
        git:
          uri: https://gitlab.com/<group>/<project>.git
          clone-on-start: true
          default-label: gitlabci-test
          searchPaths: '{profile}'
          username: ${GITLAB_USERNAME}
          password: ${GITLAB_PASSWORD}
I am in the process of switching our Kubernetes Kong api manager over to using LDAP for the Kong admin gui log in.
Using the following in our deploy manifest I am able to log in with the user "kong_admin":
- name: KONG_ADMIN_GUI_AUTH
  value: basic-auth
When I switch over to using LDAP with the following in the manifest I can no longer log in with the user "kong_admin":
- name: KONG_ADMIN_GUI_AUTH
  value: ldap-auth-advanced
The concern is: if our LDAP server is down, how can we log in to the Kong admin GUI?
FYI - I am currently waiting for IT to open the port for our LDAP server, so I am running into this now whenever I update the manifest.
Is there something I am overlooking or another way to log in if LDAP is down/unreachable?
There is a similar question but it does not use AWS::ApiGatewayV2::Stage, and I need the AutoDeploy that only the V2 seems to provide.
How do I enable CloudWatch logs and log full message data (as per the image) using CloudFormation in an AWS API Gateway?
I can't find anything in the documentation for the Stage:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-deploymentid
I am using an autodeployed stage. I am able to create the log groups, the IAM role to write logs in CloudWatch, but I can't enable the logging itself.
wsApiGateway:
  Type: AWS::ApiGatewayV2::Api
  Properties:
    Name: foo-ws-gateway
    Description: Api Gateway for Websockets
    ProtocolType: WEBSOCKET
    RouteSelectionExpression: $request.body.action
    DisableExecuteApiEndpoint: true # I use a custom domain
    # routes and integrations omitted.
wsApiStage:
  Type: AWS::ApiGatewayV2::Stage
  DependsOn:
    - wsConnectRoute
    - wsSendRoute
    - wsDisconnectRoute
  Properties:
    StageName: production
    Description: Autodeploy in production
    AutoDeploy: true
    ApiId: !Ref wsApiGateway
    AccessLogSettings:
      DestinationArn: !GetAtt wsApiGatewayLogGroup.Arn
      Format: '{"requestTime":"$context.requestTime","requestId":"$context.requestId","httpMethod":"$context.httpMethod","path":"$context.path","routeKey":"$context.routeKey","status":$context.status,"responseLatency":$context.responseLatency, "responseLength":$context.responseLength, "integrationError":$context.integration.error}'
I also had to fall back to the previous-generation API Gateway resource to define the Account, so that I could specify the ARN of the IAM role that has write access to CloudWatch Logs in the account (the Settings section in the API Gateway console). There doesn't seem to be an AWS::ApiGatewayV2::Account.
apiGatewayAccountConfig:
  Type: "AWS::ApiGateway::Account"
  Properties:
    CloudWatchRoleArn: !GetAtt apiGatewayWatchLogsRole.Arn
How do I enable CloudWatch logs and log full message data (as per the image) using CloudFormation in an AWS API Gateway?
You can't. Execution logs are not supported by HTTP API (i.e. ApiGatewayV2) as explained by AWS here:
HTTP APIs currently support access logging only, and logging setup is different for these APIs. For more information, see Configuring logging for an HTTP API.
I am using GitHub Actions as my project's CI, and I want to run unit tests when GitHub Actions builds the project. The unit tests need to connect to a database, but my database has an allow list and only IPs on that list can connect. When I run the unit tests in GitHub Actions, I don't know the runner's IP address. Is it possible to use a static IP, or is there another way to solve this? I don't want to allow any IP to connect to my database, as that would be a security problem. Any suggestions?
This is currently only possible with a self-hosted runner on a VM you can control the IP address of.
See also:
About self-hosted runners.
Alternatively, your GitHub action workflow may be able to adjust the firewall settings as part of the run.
Or you could use something like SQL Server LocalDB or SQLite to connect to a database locally on the runner. Or spin up a temporary DB in a cloud environment, open it up to the runner, and throw it away afterwards.
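As a sketch of the local-database option: with SQLite the test job needs no network access at all, because the database lives in a file (or in memory) on the runner itself. Assuming a trivial schema purely for illustration:

```python
import sqlite3

# In-memory database: nothing to install, no IP to whitelist.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# The unit test queries the ephemeral database instead of the real one.
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 1
conn.close()
```

This only works if your production database dialect is close enough to SQLite for the queries under test; otherwise a temporary instance of the real engine is the safer route.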
Or you could use a VPN client to connect the Actions runner to your environment. You can install anything you want on the runner.
You can dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action and update your RDS instance's security group ingress rules before and after your unit test steps.
This will allow you to use GitHub's hosted runners with your workflow instead of hosting your own.
Note: You will need to also set AWS credentials on your runner with permissions to update the associated security group. Also, you need to make sure the RDS instance is in a public subnet with an Internet Gateway attached and security group attached to it.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-rds-subnet-sg-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-rds-aws-region>
    - name: get runner ip address
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: connect to your rds instance and run tests
      run: |
        ...run tests...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
Ideally though you would run your integration tests in an EC2 within the same VPC as your RDS instance to avoid publicly exposing your RDS instance.
This is in beta (September 1, 2022), but it is possible to assign a static IP address to runners:
Fixed IP ranges to provide access to runners via allow list services
Setup a fixed IP range for your machines by simply ticking a check box, this provides an IP range that can be allow listed in internal systems and in GitHub’s allow list to keep using Actions while making your GitHub environment more secure.
More details here
If your database happens to be Redis or PostgreSQL, GitHub Actions includes a built-in feature called Service Containers to spin up an ephemeral database in CI for testing purposes.
These databases are short-lived: after your job that uses it completes, the service container hosting the database is destroyed. You can either run the database in a container or directly on the virtual machine if desired.
For more info, see Creating PostgreSQL service containers in the GitHub Actions docs.
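A minimal sketch of such a job, assuming the stock postgres image with illustrative credentials (the test script name is a placeholder):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_PASSWORD: test-password
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: run tests
        run: ./run-tests.sh  # connects to localhost:5432
```

The service container is torn down automatically when the job finishes, so there is no state to clean up.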
If you happen to be using another database, you can do some more manual legwork to install and run it yourself on the runner.
I am trying to deploy a Prometheus exporter with Azure DevOps; however, the configuration has a username and a password which I want to populate through Azure Pipelines, as I don't want to store the credentials in my repository. My YAML config looks like this:
version: 3
max_repetitions: 25
timeout: 10s
auth:
  security_level: authPriv
  username: admin
  password: password123
You'd need to use a token-replace step, or just a script that does the replacement; there is nothing built in that handles that for you. Another alternative is some sort of "offloading": move these values to secret variables and reference them from there, and/or use some sort of key vault.
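A sketch of the script-based approach: check a tokenized config into the repo, store the real values as secret pipeline variables, and substitute them in a step before deployment. The variable names, token syntax, and file name here are all illustrative:

```yaml
# The config checked into the repo contains tokens instead of values, e.g.:
#   auth:
#     username: '#{SNMP_USERNAME}#'
#     password: '#{SNMP_PASSWORD}#'
steps:
  - script: |
      sed -i "s/#{SNMP_USERNAME}#/${SNMP_USERNAME}/" snmp.yml
      sed -i "s/#{SNMP_PASSWORD}#/${SNMP_PASSWORD}/" snmp.yml
    displayName: inject exporter credentials
    env:
      SNMP_USERNAME: $(snmpUsername)  # secret variables are not exposed to
      SNMP_PASSWORD: $(snmpPassword)  # scripts unless mapped in explicitly
```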
I know I can do this: https://docs.travis-ci.com/user/deployment/cloudfoundry
Now in .travis.yml, it will have
deploy:
  edge: true
  provider: cloudfoundry
  username: hulk_hogan@example.com
  password: supersecretpassword
  api: https://api.run.pivotal.io
  organization: myawesomeorganization
  space: staging
Although the password can be encrypted by running
travis encrypt --add deploy.password
I don't want to put the username and password (even encrypted) in the yml file. Is there another way for Travis to deploy apps to Cloud Foundry (or IBM Bluemix)?
There are several ways of passing credentials with Cloud Foundry. Putting them in your .yml file is just one option.
You can set them manually with the command cf set-env, as explained here: https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#view-env
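For example, assuming an app named my-app, the credentials can be set from the cf CLI and picked up by the app on the next restage (names and values are placeholders):

```shell
cf set-env my-app CF_USERNAME hulk_hogan@example.com
cf set-env my-app CF_PASSWORD supersecretpassword
cf restage my-app
```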
If you are afraid of the CLI, Bluemix also allows you to create user-defined environment variable with its GUI : https://github.com/ibm-cds-labs/simple-data-pipe/wiki/Create-a-user-defined-environment-variable-in-Bluemix#use-the-bluemix-user-interface
I don't want to put username and password(even it's encrypted) in yml file
FYI, the .yml file does not leave your computer/CI server and is just read once by Cloud Foundry.