JFrog Pipelines does not come up after an upgrade - jfrog-pipelines

In the logs, I see the following error:
2022-11-15T02:00:52.941Z [jfrou] [FATAL] [1914f17694f779cc] [bootstrap.go:99 ] [main ] [] - Cluster join: Failed resolving join key: Corrupted join key: encoding/hex: invalid byte: U+006E 'n'
I have uninstalled and reinstalled, but I am still facing the same problem.

Check that the correct join key is set for JF_SHARED_SECURITY_JOINKEY in /opt/jfrog/pipelines/var/etc/router/router_compose.yml, then run pipelines restart. The encoding/hex: invalid byte error means the value being read is not a valid hex string, so it must match the join key exactly as issued by Artifactory, with no stray characters.
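For reference, a minimal sketch of what the router environment in router_compose.yml might look like - the service layout and the key value here are illustrative, not taken from an actual installation. The join key must be the exact hex string from Artifactory; any non-hex character (such as the 'n' reported in the error above) will cause the Corrupted join key failure.
services:
  router:
    environment:
      # illustrative placeholder - use the real join key from the Artifactory UI
      - JF_SHARED_SECURITY_JOINKEY=0123456789abcdef0123456789abcdef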

Related

[ERROR][o.o.a.a.AlertIndices] info deleteOldIndices

I am running an OpenSearch 2.3 cluster, and in the log I can see the following error:
[2023-02-13T03:37:44,711][ERROR][o.o.a.a.AlertIndices ] [opensearch-node1] info deleteOldIndices
What triggers this error? I've never set up any alerts. In the past, on the same cluster, I had some ISM policies for the indices, but I removed them all. Could this be linked to the error I am seeing?
Thanks.

401 from vulnerability DB sync during trial

I just started a 30-day trial of Artifactory Pro and Xray on-prem, stood up using docker-compose.
Most functionality is working fine; however, when I try to sync the Xray vulnerability DB through the UI, it fails. Looking at the Xray server service log, I see:
2021-08-03T08:54:44.091Z [jfxr ] [ERROR] [f1000c9d14bbcc48] [updates_job:389 ] [main ] Updates worker id 0 failed to download updates from https://jxray.jfrog.io/api/v1/updates/onboarding?version=3.25.1: failed to get online updates
--- at /go/src/jfrog.com/xray/internal/jobs/scanner/scanner_job.go:793 (DownloadOnlineUpdates) ---
Caused by: Failed to access :https://jxray.jfrog.io/api/v1/updates/onboarding?version=3.25.1 return status code : 401
2021-08-03T08:54:44.091Z [jfxr ] [ERROR] [f1000c9d14bbcc48] [updates_job:341 ] [main ] failed to Download online updates
--- at /go/src/jfrog.com/xray/internal/jobs/updates_job.go:397 (UpdatesJob.downloadUpdateUrlsAndLastUpdateTime) ---
Caused by: failed to get online updates
--- at /go/src/jfrog.com/xray/internal/jobs/scanner/scanner_job.go:793 (DownloadOnlineUpdates) ---
Caused by: Failed to access :https://jxray.jfrog.io/api/v1/updates/onboarding?version=3.25.1 return status code : 401
2021-08-03T08:55:01.076Z [jfxr ] [INFO ] [ ] [samplers:327 ] [main ]
I get a similar response if I switch to offline sync and run the offline update:
~/src/artifactory ❯ jfrog xr offline-update --license-id=[redacted for posting] --version=3.25.1
[Info] Getting updates...
[Error] Response: Server response: 401 Unauthorized
I'm assuming this is a licensing problem? Any suggestions, please? I'm also assuming the Xray trial includes access to the vulnerability DB?
The issue seems to be related to the Xray trial license not being added correctly, as a 401 means license validation is failing on the JFrog end. Kindly confirm that the Xray trial license is added correctly under the UI --> Administration --> License | Xray trial license. Also, refer to this KB article for more insights.
It does seem to have been an issue with the license. I signed up for another trial and used the Xray license on my existing cluster. It initially failed in the same way, but shortly afterwards I was able to do an offline download (and once that worked, I tried an online sync, which also worked).

SAM Deployment failed Error- Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy a package with SAM, the very first status that appears in the CloudFormation console is ROLLBACK_IN_PROGRESS, and after that it changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and deploying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I had set SQS as the event source for the Lambda function, but had not provided permissions like the following in the Lambda policies:
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
I found this error in the "Events" tab of the CloudFormation service.
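For reference, a minimal sketch (with hypothetical logical IDs, handler, and paths) of how the missing SQS permissions can be attached to the function in the SAM template alongside the SQS event source; the real template in the project will differ:
# Hypothetical SAM template excerpt - MyFunction, MyQueue, and the handler are illustrative names.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: java8
    Handler: com.example.Handler::handleRequest
    CodeUri: ./build/function.zip
    Policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueAttributes
            Resource: "*"
    Events:
      QueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn   # the queue's ARN, not its URL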

Concourse 3.3.0 spitting hard-to-debug error: "json: unsupported type: map[interface {}]interface {}"

We are using some community custom resource types (https://github.com/ljfranklin/terraform-resource and https://github.com/cloudfoundry/bosh-deployment-resource). After upgrading to Concourse 3.3.0, we've begun consistently seeing the following error on a few of our jobs at the same step: json: unsupported type: map[interface {}]interface {}.
This is fairly hard to debug as there is no other log output other than that. We are unsure what is incompatible between those resources and Concourse.
Notes about our pipeline:
We had originally substituted all of our usages of {{}} with (()), but reverting that did not make the error go away.
We upgraded Concourse from v3.0.1.
The failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
We are using a resource called elsa-aws-storage-terraform, found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
That resource is of a custom resource-type terraform found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L45-L48
A similar failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L871-L886
This is related to the issue of not being able to define nested maps in resource configuration: https://github.com/concourse/concourse/issues/1345
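For illustration, a hedged sketch of the kind of nested map in a resource's source block that triggers the json: unsupported type: map[interface {}]interface {} error on 3.3.0 - the resource name matches the pipeline above, but the field values are hypothetical:
resources:
- name: elsa-aws-storage-terraform
  type: terraform
  source:
    storage:
      bucket: my-terraform-state-bucket   # hypothetical
      bucket_path: elsa-aws-storage
    vars:
      tags:          # a nested map like this is what 3.3.0 fails to serialize
        team: capi
        env: dev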

Capifony SSH Exception on Windows - 998 error code

I am trying to set up Capifony to deploy on Windows; however, when running cap deploy I get the following output.
Spec
ruby 2.0.0p481
capifony v2.7.0
The error message
servers: ["homestead.app"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: homestead.app (Net::SSH::Exception: Creation of file mapping failed with error: 998) connection failed for: homestead.app (Net::SSH::Exception: Creation of file mapping failed with error: 998)
If I close down Pageant this issue goes away; however, I require Pageant to load the SSH key for the GitHub repo, as it is required for doing a git ls-remote locally.
Any suggestions/workarounds?
Related issues found
https://github.com/test-kitchen/test-kitchen/issues/448
I resolved my issue by using an older version of Ruby (1.9.3-p545).