I'm trying to set up a custom domain for my API Gateway and Lambda function.
I registered a domain with Route 53, e.g. myDomainToTestApi.net.
I also created the certificates for: myDomainToTestApi.net, *.myDomainToTestApi.net, www.myDomainToTestApi.net
I installed the serverless-domain-manager plugin for the Serverless Framework.
In my serverless.yml I added (under custom):
customDomain:
  domainName: myDomainToTestApi.net
  basePath: ''
  stage: ${opt:stage, 'dev'}
  certificateName: '*.myDomainToTestApi.net'
  createRoute53Record: true
ALL resources are in us-east-1
When I run:
sls create-domain
I receive the following error...
Serverless: Load command test
Serverless: Load command dashboard
Serverless: Invoke create_domain
Serverless: [AWS apigateway 404 0.374s 0 retries] getDomainName({ domainName: 'myDomainToTestApi.net' })
Serverless Domain Manager: NotFoundException: Invalid domain name identifier specified
Serverless: [AWS acm 200 0.35s 0 retries] listCertificates({ CertificateStatuses: [ 'PENDING_VALIDATION', 'ISSUED', 'INACTIVE', [length]: 3 ] })
Error --------------------------------------------------
Error: Error: Could not find the certificate *.myDomainToTestApi.net.
at ServerlessCustomDomain.<anonymous> (/Users/user/project/node_modules/serverless-domain-manager/dist/index.js:279:23)
If I go to the Certificate Manager console, the status for all of them is Issued.
Does anyone know what could be happening? Thanks.
Try adding another property called certificateArn; you can find the certificate ARN in the Certificate Manager detail view for the domain:
certificateArn: 'xxxxx'
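For example, the customDomain block from the question would become something like this (the ARN value is a placeholder for your own certificate's ARN):
customDomain:
  domainName: myDomainToTestApi.net
  basePath: ''
  stage: ${opt:stage, 'dev'}
  certificateArn: 'arn:aws:acm:us-east-1:111122223333:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
  createRoute53Record: true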
The command is actually:
sls create_domain
(note the underscore rather than the hyphen)
I tried in an incognito window as well, but the same issue exists.
Currently I have added the following in server-deployment.yaml:
args:
  - server
  - --auth-mode
  - sso
And in values.yaml:
sso:
  # SSO configuration when SSO is specified as a server auth mode.
  # All the values are required. SSO is activated by adding --auth-mode=sso
  # to the server command line.
  #
  # The root URL of the OIDC identity provider.
  issuer: http://<keycloak_ip>/auth/realms/demo
  # Name of a secret and a key in it to retrieve the app OIDC client ID from.
  clientId:
    name: argo
    key: client-id
  # Name of a secret and a key in it to retrieve the app OIDC client secret from.
  clientSecret:
    name: "argo-server-sso"
    key: client-secret
  # The OIDC redirect URL. Should be in the form <argo-server-root-url>/oauth2/callback.
  redirectUrl: http:///argo/oauth2/callback
And in the Keycloak UI, I have created the client and client credentials.
kubectl create secret generic "argo-server-sso" --from-literal=client-secret=9a9c60ba-647d-480c-b6fa-82c19caad26a
kubectl create secret generic "argo" --from-literal=client-id=argo
After hitting the Argo Server URL, I manually need to click the login option; after that the Keycloak page appears, and then a popup comes up: "Failed to login: Unauthorized".
Server logs:
kubectl logs argo-server-5c7f8c5cbb-9fcqk
time="2021-01-20T12:06:26.876Z" level=info authModes="[sso]" baseHRef=/ managedNamespace= namespace=default secure=false
time="2021-01-20T12:06:26.877Z" level=warning msg="You are running in insecure mode. Learn how to enable transport layer security: https://argoproj.github.io/argo/tls/"
time="2021-01-20T12:06:26.877Z" level=info msg="config map" name=argo-workflow-controller-configmap
time="2021-01-20T12:06:28.318Z" level=info msg="SSO configuration" clientId="{{argo} client-id }" issuer="http://10.xx.xx.xx:xxxx/auth/realms/demo" redirectUrl="http://xx/argo/oauth2/callback"
time="2021-01-20T12:06:28.318Z" level=info msg="SSO enabled"
time="2021-01-20T12:06:28.322Z" level=info msg="Starting Argo Server" instanceID= version=v2.12.2
time="2021-01-20T12:06:28.322Z" level=info msg="Creating event controller" operationQueueSize=16 workerCount=4
time="2021-01-20T12:06:28.323Z" level=info msg="Argo Server started successfully on http://localhost:2746"
time="2021-01-20T12:07:21.990Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=GetVersion grpc.service=info.InfoService grpc.start_time="2021-01-20T12:07:21Z" grpc.time_ms=0.379 span.kind=server system=grpc
time="2021-01-20T12:07:22.009Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2021-01-20T12:07:22Z" grpc.time_ms=0.075 span.kind=server system=grpc
I integrated ArgoCD with Keycloak successfully.
You have one clear, visible issue: your YAML indentation is wrong.
Make sure you keep the right indentation, as per the default values in the Helm chart:
https://github.com/argoproj/argo-helm/blob/1aea2c41798972ff0077108f926bb9095f3f9deb/charts/argo/values.yaml#L255-L283
Accordingly, your values should be as follows (assuming your Argo is served at the hostname workflows.company.com):
server:
  extraArgs:
    - --auth-mode=sso
  sso:
    issuer: http://<keycloak_ip>/auth/realms/demo
    clientId:
      name: argo
      key: client-id
    clientSecret:
      name: "argo-server-sso"
      key: client-secret
    redirectUrl: https://workflows.company.com/argo/oauth2/callback
From the Keycloak side now, under your client, make sure you fill in the Valid Redirect URIs field as per your ingress hostname.
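For example, with the hostname assumed above, the Valid Redirect URIs entry in the Keycloak client would be:
https://workflows.company.com/argo/oauth2/callback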
Since HTTPS certificates for CloudFront can only be created in us-east-1, and my entire stack is created in eu-west-1, I wanted to create a stack in us-east-1 that contains the ACM certificate, and then use that certificate in my stack(s) in eu-west-1.
The only problem is: how do I reference this certificate without hardcoding it, since I can't !ImportValue an output from another region?
e.g.
Distribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Origins:
        - DomainName: !GetAtt S3Bucket.RegionalDomainName
          Id: ****
          CustomOriginConfig:
            HTTPPort: '80'
            HTTPSPort: '443'
            OriginProtocolPolicy: https-only
      DefaultRootObject: 'index.html'
      Enabled: true
      Aliases:
        - 'bla.bla.com'
      DefaultCacheBehavior:
        TargetOriginId: '*-origin'
        AllowedMethods:
          - GET
          - HEAD
        ViewerProtocolPolicy: redirect-to-https
        CachePolicyId: '658327ea-f89d-4fab-a63d-7e88639e58f6'
      ViewerCertificate:
        AcmCertificateArn: !ImportValue ****
        SslSupportMethod: sni-only
What do I need to put on the AcmCertificateArn line when I deploy this in eu-west-1?
As you've pointed out, you can't make export/import references between stacks in different regions. In this case, you would usually provide the certificate ARN as an input parameter to your stack in eu-west-1.
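A minimal sketch of that approach (the parameter name CertificateArn is illustrative):
Parameters:
  CertificateArn:
    Type: String
    Description: ARN of the ACM certificate created by the us-east-1 stack
Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        # ... the rest of the distribution config from the question ...
        ViewerCertificate:
          AcmCertificateArn: !Ref CertificateArn
          SslSupportMethod: sni-only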
The other options involve using SSM Parameter Store dynamic references to pass the value of the certificate's ARN. For a fully automated solution, you would need to develop a custom resource in eu-west-1, in the form of a Lambda function, that queries the stack in us-east-1 for the ARN in its outputs and returns it to the stack in eu-west-1.
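For the dynamic reference variant, a sketch, assuming the ARN has been written to an SSM parameter named /infra/cloudfront-cert-arn in eu-west-1 (the parameter name is illustrative, and SSM parameters are region-local, so some piece of automation has to put the value there):
ViewerCertificate:
  # Resolved from this region's SSM Parameter Store at deploy time;
  # a version suffix (e.g. :1) can be appended to pin the value.
  AcmCertificateArn: '{{resolve:ssm:/infra/cloudfront-cert-arn}}'
  SslSupportMethod: sni-only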
I'm trying to create a policy that allows users to access a portion of the secret hierarchy based on their usernames. Rather than having a different policy for each user, I want to have one templated policy. I think this should work, but I keep getting permission-denied errors. If I remove the templating and just hard-code the username in the policy path, secret retrieval works just fine, so the problem doesn't seem to lie in any other part of the policy definition.
This is all with Vault 1.3.1, against a dev server, but the problem first came up on a non-dev server, with GCP/GCE authentication and database secrets, so it doesn't seem to be specific to any of those things, either.
Enable username/password authentication, and create a user that points to a new policy (to be defined later).
$ vault auth enable userpass
Success! Enabled userpass auth method at: userpass/
$ vault write auth/userpass/users/duvall policies=default,p2 password=duvall
Success! Data written to: auth/userpass/users/duvall
Login as this user and take a look at the token metadata.
$ vault login -method userpass username=duvall password=duvall
$ vault token lookup
Key Value
--- -----
accessor 9ga3alRqZ6E3aSCEBNFWJY1X
creation_time 1581468214
creation_ttl 768h
display_name userpass-duvall
entity_id 7513dc68-785b-d151-0efb-71315fc026dc
expire_time 2020-03-15T00:43:34.707416501Z
explicit_max_ttl 0s
id s.YZRQ3uclh2rg2H7gh3qH84P3
issue_time 2020-02-12T00:43:34.707423899Z
meta map[username:duvall]
num_uses 0
orphan true
path auth/userpass/login/duvall
policies [default p2]
renewable true
ttl 767h50m35s
type service
Create the aforementioned policy with a path templated based on the metadata key username.
$ export VAULT_TOKEN=root
$ echo 'path "secret/data/role-secrets/{{identity.entity.metadata.username}}/*" {capabilities = ["read"]}' | vault policy write p2 -
Success! Uploaded policy: p2
Create a secret that matches the path in the policy.
$ vault kv put secret/role-secrets/duvall/s1 foo=bar
Key Value
--- -----
created_time 2020-02-12T00:44:36.509412834Z
deletion_time n/a
destroyed false
version 1
As the user, reading the secret results in failure.
$ export VAULT_TOKEN=s.YZRQ3uclh2rg2H7gh3qH84P3
$ vault kv get secret/role-secrets/duvall/s1
Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/internal/ui/mounts/secret/role-secrets/duvall/s1
Code: 403. Errors:
* preflight capability check returned 403, please ensure client's policies grant access to path "secret/role-secrets/duvall/s1/"
Rewrite the policy to remove the templating.
$ export VAULT_TOKEN=root
$ echo 'path "secret/data/role-secrets/duvall/*" {capabilities = ["read"]}' | vault policy write p2 -
Success! Uploaded policy: p2
This time, reading the secret succeeds.
$ export VAULT_TOKEN=s.YZRQ3uclh2rg2H7gh3qH84P3
$ vault kv get secret/role-secrets/duvall/s1
====== Metadata ======
Key Value
--- -----
created_time 2020-02-12T00:44:36.509412834Z
deletion_time n/a
destroyed false
version 1
=== Data ===
Key Value
--- -----
foo bar
I'm not sure how relevant this is, but ... adding a metadata list capability to the policy changes the read error from a "preflight capability check" to a more normal "permission denied".
$ echo 'path "secret/metadata/*" {capabilities = ["list"]}\npath "secret/data/role-secrets/{{identity.entity.metadata.username}}/*" {capabilities = ["read"]}' | VAULT_TOKEN=root vault policy write p2 -
Success! Uploaded policy: p2
$ vault kv get secret/role-secrets/duvall/s1
Error reading secret/data/role-secrets/duvall/s1: Error making API request.
URL: GET http://127.0.0.1:8200/v1/secret/data/role-secrets/duvall/s1
Code: 403. Errors:
* 1 error occurred:
* permission denied
You are missing one point: if you want to give access to secrets/database/rdb/, then you have to grant read and list capabilities for the paths secrets, database, and rdb.
Now, if you have other secrets stored under the secrets/ path that you don't want to share, then you have to set a deny for those paths.
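For example, a sketch of a policy in that spirit, using the paths from the question (the denied subtree name is illustrative):
# List capability so the client can traverse the KV v2 metadata hierarchy.
path "secret/metadata/role-secrets/*" {
  capabilities = ["list"]
}

# Read access to the per-user subtree (KV v2 prefixes data reads with data/).
path "secret/data/role-secrets/{{identity.entity.metadata.username}}/*" {
  capabilities = ["read"]
}

# Explicitly deny a subtree that must never be shared (illustrative name).
path "secret/data/role-secrets/do-not-share/*" {
  capabilities = ["deny"]
}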
I am connecting an on-premise S/4HANA system with an SAP Cloud Platform trial account. I am using the SAP Cloud SDK to fetch all Business Partners from S/4HANA.
My Cloud Connector is set up.
My destination at the subaccount level is set up and can ping my on-premise system.
My service instances (XSUAA/Destination/Connectivity) are set up with the application.
But I get the following error:
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
The code which I am using is:
final List<BusinessPartner> businessPartners =
    new DefaultBusinessPartnerService()
        .getAllBusinessPartner()
        .select(BusinessPartner.BUSINESS_PARTNER)
        .execute(destination);
It seems an AppRouter is recommended for authorization and access, and hence I tried implementing one, but my AppRouter shows "Not Found".
AppRouter app name: approuter-demo
Below is the xs-app.json:
{
  "routes": [
    {
      "source": "^/s4ext/(.*)",
      "target": "/s4ext/$1",
      "destination": "******"
    }
  ]
}
The Manifest file is as below:
---
applications:
  - name: approuter-demo
    routes:
      - route: approuter-demo-*****trial.cfapps.eu10.hana.ondemand.com
    path: approuter
    memory: 128M
    env:
      TENANT_HOST_PATTERN: 'approuter-demo-(.*).cfapps.eu10.hana.ondemand.com'
      destinations: '[{"name":"******", "url":"https://s4ext-***.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true }]'
    services:
      - xsuaa-demo
      - connectivity-demo
      - destination-demo
Kindly guide me. Thanks.
Your destination type might be wrong; the authorization header is set via the destination. Try other authentication types in SAP CP -> Connectivity.
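For reference, a rough sketch of what an on-premise destination's properties typically look like (the name, host, and port are illustrative); for forwarding the logged-in user to the backend, the authentication type would be PrincipalPropagation:
Name: S4HANA_ONPREM
Type: HTTP
URL: http://my-virtual-host:44300
ProxyType: OnPremise
Authentication: PrincipalPropagation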
Reading your question again, I can identify two issues:
This error message in your log:
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
It may be that this error message is superfluous, indicating a problem that is actually none. In your case this header is possibly not necessary, and the SAP Cloud SDK should not try to add it. In any case, this will not influence the actual connection, so the error message is at most confusing, but not harmful in the sense of altering functionality.
Still, I am asking you to add the stack trace of this exception to your question to be very sure here.
Your AppRouter shows "Not Found":
Here I am missing more information. When does what exactly show "Not Found"? Is it that your browser cannot find your AppRouter, or is it that your AppRouter cannot find the target URL of the application?
Problem
Does anyone know how to configure bootstrap.yml to tell Spring Cloud Vault to go straight to the correct KV v2 path and not try other paths first?
Details
I can successfully connect to my Vault instance, running KV v2, but Spring Cloud Vault will always try to connect to paths in the Vault that don't exist, throwing a 403 on startup.
Status 403 Forbidden [secret/application]: permission denied; nested exception is org.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden
The above path, secret/application, doesn't exist because KV v2 puts data in the path, e.g. secret/data/application.
This isn't a show-stopper, because Spring Cloud Vault does check other paths, including the correct one that has data in the path, but the fact that a meaningless 403 is thrown during startup is like a splinter in my mind.
Ultimately, it does try the correct KV v2 path:
2019-03-18 12:22:46.611 INFO 77685 --- [ restartedMain] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='vault', propertySources=[LeaseAwareVaultPropertySource {name='secret/data/my-app'}
My configuration
spring.cloud.vault:
  kv:
    enabled: true
    backend: secret
    profile-separator: '/'
    default-context: my-app
    application-name: my-app
  host: localhost
  port: 8200
  scheme: http
  authentication: TOKEN
  token: my-crazy-long-token-string
Thanks for your help!
Add the following lines to your bootstrap.yml; this disables the generic backend:
spring.cloud.vault:
  generic:
    enabled: false
For more information, see https://cloud.spring.io/spring-cloud-vault/reference/html/#vault.config.backends.generic
In addition to the accepted answer, it's important to turn off (or just remove) the fail-fast option:
spring.cloud.vault:
  fail-fast: false
spring.cloud.vault.generic.enabled is deprecated as of Spring Cloud 3.0.0, but the 403 error is still there. To disable the warning (by telling Spring to use the exact context), this is what I used:
spring:
  config:
    import: vault://
  application:
    name: my-application
  cloud:
    vault:
      host: localhost
      scheme: http
      authentication: TOKEN
      token: my-crazy-long-token-string
      kv:
        default-context: my-application
Other configs were left at their defaults (such as port = 8200, backend = secret, etc.).