I have the following skaffold.yaml config file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifest:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: hamza9899/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
But when I run skaffold dev I get the following error:
line 16: field manifest not found in type v2alpha3.KubectlDeploy
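Judging by the error, the v2alpha3 schema expects the field under kubectl to be named manifests (plural), not manifest. A minimal sketch of the corrected deploy section, assuming everything else in the file stays the same:
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*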
We have Azure Container Registry (ACR) as our container registry, and our Azure DevOps build pipelines have Docker image build and push tasks that create various application-specific custom images on top of Docker Hub base images.
We need all these custom images, as well as the Docker Hub base images, scanned with JFrog Xray before the custom images are pushed to ACR and before any other deployment tasks run.
How can JFrog Xray be integrated into an Azure Pipelines YAML file so that it scans the newly built custom images right after the Maven build and Docker image build tasks, and before the image push to ACR?
Is there any way to integrate Azure DevOps and JFrog Xray so that these custom images are scanned as part of the Azure Pipelines build just before the push to ACR?
The pipeline I tried:
parameters:
  imageName: ''
  includeLatestTag: false
  buildContext: '$(System.DefaultWorkingDirectory)/release/target/docker'
  publishDocker: ''

steps:
- task: Docker@1
  inputs:
    azureSubscriptionEndpoint: 'mysub'
    azureContainerRegistry: $(containerRegistry)
    command: build
    includeLatestTag: ${{ parameters.includeLatestTag }}
    dockerFile: '${{ parameters.buildContext }}/Dockerfile'
    useDefaultContext: false
    buildContext: ${{ parameters.buildContext }}
    imageName: ${{ parameters.imageName }}
    arguments: $(buildArgs)
  name: Build_Docker_Image
  displayName: 'Build Docker image'
- task: JFrogDocker@1
  inputs:
    command: 'Scan'
    xrayConnection: 'jfrog xray token'
    watchesSource: 'none'
    licenses: true
    allowFailBuild: true
    threads: '3'
    skipLogin: false
- task: Docker@1
  inputs:
    azureSubscriptionEndpoint: 'mysub'
    azureContainerRegistry: $(containerRegistry)
    command: push
    includeLatestTag: ${{ parameters.includeLatestTag }}
    dockerFile: '${{ parameters.buildContext }}/Dockerfile'
    useDefaultContext: false
    buildContext: ${{ parameters.buildContext }}
    imageName: ${{ parameters.imageName }}
  name: Push_Docker_Image
  displayName: 'Push Docker image'
I tried to add the JFrogDocker task above between the Docker image build and push tasks, but I don't see any option to scan the newly built image. Any guidance?
The new JFrog Azure DevOps Extension has a JFrog Docker task that allows scanning local Docker images (as well as pulling them from and pushing them to Artifactory).
By adding the Xray scan task following the instructions here, we can have the build task wait for the Xray scan to complete. However, the build must publish its build information to Artifactory first in order for the Xray processing to be initiated.
So my proposal is to enable build promotion against the target repository and push the images once the build scan stage has completed.
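As a hedged sketch, the scan step would need to be pointed at the locally built image; the imageName input and its exact value below are assumptions on my part, so check the extension's documentation for the inputs your version actually supports:
- task: JFrogDocker@1
  inputs:
    command: 'Scan'
    xrayConnection: 'jfrog xray token'    # name of your Xray service connection
    imageName: '$(containerRegistry)/${{ parameters.imageName }}'  # assumed input: the local image to scan
    watchesSource: 'none'
    licenses: true
    allowFailBuild: true                  # fail the build on policy violations
    threads: '3'
    skipLogin: false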
I have a requirement to trigger an agent in a stage only when the build comes from the CI pipeline "A 1".
When I put the condition eq(variables['Release.Artifacts.{A 1}.SourceBranch'], 'refs/heads/main') on the agent pool and deployed, I got an error.
What is the exact conditional expression I should use to get the source branch name?
Note that I don't want to change my CI name.
Based on my test, the condition eq(variables['Release.Artifacts.{Alias}.SourceBranch'], 'refs/heads/main') works as expected.
You need to check whether the variable name is correct.
The {A 1} in your release pipeline variable is the release artifact alias, not the build pipeline name.
You can check the alias name under Release Artifacts -> Source alias.
You can use the alias directly in your variable, or you can modify the alias to use a new name.
For example: Release.Artifacts.test.SourceBranch
On the other hand, since you are using build artifacts, you can also use the variable BUILD.SOURCEBRANCH.
For example:
eq(variables['BUILD.SOURCEBRANCH'], 'refs/heads/main')
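For instance, assuming your source alias really is A 1 (spaces included), a custom condition on the agent job could look like the line below; wrapping it in succeeded() is an assumption on my part, added so the stage still respects upstream failures:
and(succeeded(), eq(variables['Release.Artifacts.A 1.SourceBranch'], 'refs/heads/main'))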
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
Improve this question
It seems my Rundeck can't do HTTPS. I'm doing SSL offload at a load balancer. The following is a snippet of my deployment YAML:
containers:
  - name: rundeck
    image: rundeck/rundeck:3.1.1
    env:
      - name: RUNDECK_GRAILS_URL
        value: "https://rundeck.somehost.io"
      - name: SERVER_SECURED_URL
        value: "https://rundeck.somehost.io"
      - name: RUNDECK_JVM_SETTINGS
        value: "-Dserver.web.context=/rundeck -Drundeck.jetty.connector.forwarded=true"
I've followed most of the tips from the net, but my Rundeck still redirects to HTTP after login.
You need to enable the SSL settings, for example:
args: ["-Dserver.https.port=4443 -Drundeck.ssl.config=/home/rundeck/server/config/ssl.properties"]
But you will need to add a certificate (for example, a self-signed certificate) to the container.
You can try to:
1) extend the official Rundeck image (like this), or
2) create a volume with the certificate and mount it on /home/rundeck/etc/truststore, as sketched below (you might also need to mount /home/rundeck/server/config/ssl.properties with the right password). BTW, I haven't tried that.
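A minimal Kubernetes sketch of option 2, assuming the truststore and an ssl.properties file are stored in a secret named rundeck-ssl (all names here are illustrative, not tested):
containers:
  - name: rundeck
    image: rundeck/rundeck:3.1.1
    args: ["-Dserver.https.port=4443 -Drundeck.ssl.config=/home/rundeck/server/config/ssl.properties"]
    volumeMounts:
      - name: rundeck-ssl
        mountPath: /home/rundeck/etc/truststore        # truststore from the secret
        subPath: truststore
      - name: rundeck-ssl
        mountPath: /home/rundeck/server/config/ssl.properties
        subPath: ssl.properties
volumes:
  - name: rundeck-ssl
    secret:
      secretName: rundeck-ssl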
You need to define the -Drundeck.ssl.config parameter and the SSL port (-Dserver.https.port=4443) too in your Rundeck section (the example has HAProxy and MySQL as part of the container setup, but you can use just the Rundeck section).
This parameter points to a file with the following content (with your own paths and certificates; you have a full SSL configuration explanation here):
keystore=/etc/rundeck/ssl/keystore
keystore.password=password
key.password=password
truststore=/etc/rundeck/ssl/truststore
truststore.password=password
You can check the entire example project here.
Alternatively, you can use this image, which may be easier to configure (check the "SSL" parameters).
I am writing a CloudFormation template (CFT) for a website hosted on S3. The YAML file passes template validation with no issues; however, the build agent returns the following error:
yaml.constructor.ConstructorError: could not determine a constructor for the tag '!GetAtt'
Outputs:
  WebsiteURL:
    Value: !GetAtt RootBucket.WebsiteURL
    Description: URL for website hosted on S3
Try it without the shorthand version of Fn::GetAtt
Outputs:
  WebsiteURL:
    Value:
      Fn::GetAtt: [ RootBucket, WebsiteURL ]
    Description: URL for website hosted on S3
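The error comes from a generic YAML parser that doesn't know CloudFormation's short-form tags, which is why the long form loads cleanly. If the build agent parses the template with PyYAML and you control that code, another option is to register a constructor for the tag yourself; a sketch under that assumption:
import yaml

# Teach PyYAML to load !GetAtt by translating it into its long form,
# so downstream tooling still sees a plain mapping.
def getatt_constructor(loader, node):
    resource, attribute = loader.construct_scalar(node).split(".", 1)
    return {"Fn::GetAtt": [resource, attribute]}

yaml.SafeLoader.add_constructor("!GetAtt", getatt_constructor)

with open("template.yaml") as f:   # hypothetical template path
    template = yaml.safe_load(f)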
I have configured the ejabberd server, but I am not able to access http://www.example.com:5280/crossdomain.xml.
I have set the following parameters in ejabberd.cfg:
Listeners:
{5280, ejabberd_http, [
    {access, all},
    {request_handlers, [
        {["pub", "archive"], mod_http_fileserver},
        {["xmpp-http-bind"], mod_http_bind}
    ]},
    %% captcha,
    http_bind,
    http_poll,
    register,
    web_admin
]}
Modules:
{mod_http_fileserver, [
    {docroot, "/var/log/ejabberd/"},
    {accesslog, "/var/log/ejabberd/access.log"},
    {content_types, [{".xml", "text/xml"}]}
]},
crossdomain.xml is present at this path on CentOS: /var/log/ejabberd/.
Can anyone help in resolving this issue? I heard that for crossdomain.xml we can also configure the Apache web server, but I don't know how to do that.
I guess you are using Strophe with ejabberd. The crossdomain.xml file has nothing to do with ejabberd; it has to do with allowing Flash to make cross-domain requests.
Of course, you don't need Flash, and it's better to avoid it altogether by putting a proxy in front, so the HTTP-bind endpoint is served from the same origin as your page. You can use Apache, nginx, or any other web server.
Here is a tutorial for nginx.
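A minimal nginx sketch of that idea, assuming ejabberd's HTTP-bind listener is on localhost:5280 and the site is served from the same host (hostnames and paths are illustrative):
server {
    listen 80;
    server_name www.example.com;

    # Serve the website itself from this origin...
    root /var/www/html;

    # ...and proxy BOSH requests to ejabberd, so the browser never makes
    # a cross-domain request and crossdomain.xml isn't needed at all.
    location /xmpp-http-bind {
        proxy_pass http://127.0.0.1:5280/xmpp-http-bind;
        proxy_set_header Host $host;
    }
}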