I would love to use Jenkins job-dsl pipelineJob for creating a build job for a GitHub repository with a static (and centrally maintained) pipeline.
But looking at the documentation (https://jenkinsci.github.io/job-dsl-plugin/#path/javaposse.jobdsl.dsl.DslFactory.pipelineJob-definition) I can either use cps with a static script or cpsScm with an SCM and a reference to the Jenkinsfile in the repository.
The requirement for having an SCM defined comes from the gitParameter plugin, which I want to use for picking a Git revision.
Is there a way to use a static script for the pipeline together with the SCM?
Update:
This is concretely what I would like to do:
defining a pipeline job
using a git Parameter to select the revision
declare the particular script inline
pipelineJob("test") {
    parameters {
        gitParameter {
            name('revision')
            type('PT_BRANCH_TAG')
            defaultValue('origin/master')
            selectedValue('DEFAULT')
            description('')
            branch('')
            branchFilter('')
            tagFilter('')
            useRepository('')
            quickFilterEnabled(true)
        }
    }
    logRotator {
        numToKeep(50)
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        github("<my-repo>")
                        credentials('github')
                    }
                    branch('$revision')
                }
            }
            script("""
                @Library(value='pipeline-lib@master', changelog=false) _
                myPipeline projectName: 'test-name'
            """)
        }
    }
}
Pipeline as code has incorporated many functions of job-dsl, which is probably why some of the fine-tuning capabilities are missing from job-dsl's pipelineJob. The examples given on the Git Parameter plugin page are actually in pipelines, which could be the script portion of the cps portion of the job-dsl definition. The code quoted in the question could be converted to:
pipelineJob("test") {
    logRotator {
        numToKeep(50)
    }
    definition {
        cps {
            script('''
                pipeline {
                    agent any
                    libraries {
                        lib("pipeline-lib@master")
                    }
                    parameters {
                        gitParameter name: 'revision',
                                     type: 'PT_BRANCH_TAG',
                                     defaultValue: 'origin/master',
                                     selectedValue: 'DEFAULT',
                                     description: '',
                                     branch: '',
                                     branchFilter: '',
                                     tagFilter: '',
                                     useRepository: '',
                                     quickFilterEnabled: true
                    }
                    stages {
                        stage('Build') {
                            steps {
                                git branch: "${revision}",
                                    url: '<myrepo>',
                                    credentialsId: 'github'
                                // not familiar with what the next line does, assuming it's a pipeline step
                                myPipeline projectName: 'test-name'
                            }
                        }
                    }
                }
            ''')
        }
    }
}
This is for a declarative pipeline. There is also an example of a scripted pipeline on the Git Parameter plugin page.
Related
I deployed a Gatsby website with GitHub Pages and I'm getting errors like this:
Locally everything works perfectly; the errors occur only on the server.
It seems the server cannot resolve paths correctly:
an unnecessary repository name is being added after the domain. How can I remove that?
I tried changing some host options and deploying the app again. Once it worked properly, I don't know why; then I made another deploy and it crashed again.
My gatsby-config.js:
const path = require("path");
const { title, keywords, description, author, defaultLang, trackingId } = require("./config/site");

module.exports = {
  pathPrefix: "/lbearthworks",
  siteMetadata: {
    title,
    keywords,
    description,
    author,
  },
  plugins: [
    {
      resolve: "gatsby-plugin-google-analytics",
      options: {
        trackingId,
      },
    },
    {
      resolve: "gatsby-plugin-manifest",
      options: {
        name: title,
        short_name: "Lb",
        start_url: "/",
        background_color: "#212121",
        theme_color: "#fed136",
        display: "minimal-ui",
        icon: "content/assets/gatsby-icon.png",
      },
    },
    "gatsby-transformer-remark",
    {
      resolve: "gatsby-source-filesystem",
      options: {
        name: "markdown",
        path: `${__dirname}/content`,
      },
    },
    {
      resolve: "gatsby-source-filesystem",
      options: {
        name: "images",
        path: `${__dirname}/content/assets/images`,
      },
    },
    "gatsby-plugin-eslint",
    "gatsby-plugin-react-helmet",
    "gatsby-transformer-sharp",
    "gatsby-plugin-sharp",
    "gatsby-plugin-offline",
    {
      resolve: "gatsby-plugin-sass",
      options: {
        data: `@import "core.scss";`,
        includePaths: [path.resolve(__dirname, "src/style")],
      },
    },
    ...
  ],
};
Live version
GitHub Repository
When dealing with GitHub Pages you need to add an extra flag to your build command: since you are adding a pathPrefix variable, you need to tell Netlify how to prefix those paths. Ideally, the build command should look like:
"deploy": "gatsby build --prefix-paths && gh-pages -d public"
In your case, because you are adding a file-based configuration (netlify.toml), your build command is:
[build]
command = "yarn && yarn testbuild"
publish = "public"
Note that testbuild is yarn test && yarn build, according to your repository.
So, one workaround is changing your package.json command to:
"testbuild": "yarn test && yarn build --prefix-paths && gh-pages -d public",
In addition, you should be on the gh-pages branch, as Gatsby's documentation shows:
When you run npm run deploy all contents of the public folder will be
moved to your repository’s gh-pages branch. Make sure that your
repository’s settings has the gh-pages branch set as the source to
deploy from.
Let's say that we have the GitHub Package Registry repository https://maven.pkg.github.com/someOrganization. How can I get the list of all packages in this repo into a txt file?
This can be done using the preview API for GitHub Packages. You can query it with GraphQL:
query($login: String!) {
  organization(login: $login) {
    registryPackages(first: 10, packageType: MAVEN) {
      nodes {
        name
      }
    }
  }
}
This will output something like:
{
  "data": {
    "organization": {
      "registryPackages": {
        "nodes": [
          {
            "name": "package1"
          },
          {
            "name": "package2"
          }
        ]
      }
    }
  }
}
At the time of writing, this requires both:
a valid token with the read:org and read:packages scopes
the Accept header for the preview API: application/vnd.github.packages-preview+json
Now, because you want to do this over the command line, you could use curl. There's already a good answer on how to use curl to access GitHub's GraphQL API: https://stackoverflow.com/a/42021388/1174076
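If you prefer scripting it instead of hand-writing the curl invocation, the request body and headers can be assembled like this. This is a sketch only: the token value is a placeholder, and the query and preview Accept header are the ones quoted above.

```javascript
// Assemble the HTTP request for the GitHub Packages preview GraphQL API.
// The query and Accept header follow the answer above; TOKEN is a placeholder.
const query = `query($login: String!) {
  organization(login: $login) {
    registryPackages(first: 10, packageType: MAVEN) {
      nodes { name }
    }
  }
}`;

function buildRequest(login, token) {
  return {
    method: "POST",
    url: "https://api.github.com/graphql",
    headers: {
      Authorization: `bearer ${token}`,
      Accept: "application/vnd.github.packages-preview+json",
    },
    // GraphQL over HTTP expects a JSON body with "query" and "variables"
    body: JSON.stringify({ query, variables: { login } }),
  };
}

const req = buildRequest("someOrganization", "TOKEN");
console.log(JSON.parse(req.body).variables.login); // "someOrganization"
```

From there you can hand `req` to any HTTP client (or mirror the same headers and body in a curl command) and pipe the resulting package names into a txt file.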
Hope this helps.
I created a Gatsby blog using the Netlify one-click button, but I wish to have my own home landing page using index.html, and then have the Gatsby blog built in the /blog directory of my site (example.com/blog).
I have looked into the config.js and gatsby-config.js files for settings to change the build location plus I have also tried a few different build commands in Netlify such as
Build command : gatsby build
Publish directory: public/articles
Can anyone help me build this in a specific folder (directory) whilst leaving my own index.html in the root directory?
Have a look at this starter and have a read of Gatsby tutorial Part 7
gatsby-node.js
const replacePath = path => (path === `/` ? path : path.replace(/\/$/, ``))
const { createFilePath } = require(`gatsby-source-filesystem`)
const path = require("path")

exports.onCreateNode = ({ node, getNode, actions }) => {
  const { createNodeField } = actions
  if (node.internal.type === `MarkdownRemark`) {
    const slug = createFilePath({ node, getNode, basePath: `blog` })
    createNodeField({
      node,
      name: `slug`,
      value: replacePath(slug),
    })
  }
}

exports.createPages = ({ actions, graphql }) => {
  const { createPage } = actions
  const postTemplate = path.resolve(`src/templates/postTemplate.js`)
  return graphql(`
    {
      allMarkdownRemark(
        sort: { order: DESC, fields: [frontmatter___date] }
        limit: 1000
      ) {
        edges {
          node {
            fields {
              slug
            }
          }
        }
      }
    }
  `).then(result => {
    if (result.errors) {
      return Promise.reject(result.errors)
    }
    result.data.allMarkdownRemark.edges.forEach(({ node }) => {
      createPage({
        path: replacePath(node.fields.slug),
        component: postTemplate
      })
    })
  })
}
Here in onCreateNode, if the node's internal type is MarkdownRemark, a filepath is created with a base path of blog, and that new filepath is added to a new node field called slug.
This new field is now available in any graphQL queries.
So later in createPages, the new slug field is queried and used in the createPage path option.
So pages in your src/blog folder will continue to be served from the root, while posts generated by MarkdownRemark will be served from /blog/.
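The replacePath helper above does nothing more than strip a trailing slash, leaving the root path alone. A quick standalone check of that behavior:

```javascript
// Same helper as in gatsby-node.js above: strip a trailing slash,
// but leave the root path "/" untouched.
const replacePath = path => (path === `/` ? path : path.replace(/\/$/, ``));

console.log(replacePath(`/blog/my-post/`)); // "/blog/my-post"
console.log(replacePath(`/`));              // "/"
```

That keeps the generated page paths consistent, so a post's slug and its createPage path never differ by a trailing slash.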
In gatsby-config.js add this
module.exports = {
pathPrefix: `/blog`,
and when building your app:
gatsby build --prefix-paths
You’ll need to tell Gatsby where you want the file. Netlify just wants to know where your public folder is.
gatsby build --output-dir public/articles
You can either then move your own index.html file into the directory created (public), or have it already there*.
I would also recommend looking at letting Gatsby run your whole site and create a static file for your homepage; then your build process is much simpler, and you can run it locally.
* Not sure if that is allowed; Gatsby may require an empty or non-existing folder to build into.
I am using following step in my pipeline jenkins job:
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'my@xyz.com', sendToIndividuals: true])
But no email was sent when the build failed (i.e. error). Any pointers why?
P.S. Emails can be sent from this server, I have tested that.
Use a Declarative Pipeline with the new syntax, for example:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'echo "Fail!"; exit 1'
            }
        }
    }
    post {
        always {
            echo 'This will always run'
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            mail bcc: '', body: "<b>Example</b><br>\n<br>Project: ${env.JOB_NAME} <br>Build Number: ${env.BUILD_NUMBER} <br>Build URL: ${env.BUILD_URL}", cc: '', charset: 'UTF-8', from: '', mimeType: 'text/html', replyTo: '', subject: "ERROR CI: Project name -> ${env.JOB_NAME}", to: "foo@foomail.com"
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}
You can find more information in the Official Jenkins Site:
https://jenkins.io/doc/pipeline/tour/running-multiple-steps/
Note that this new syntax makes your pipelines more readable, logical, and maintainable.
You need to manually set the build result to failure, and also make sure it runs in a workspace. For example:
try {
    throw new Exception('fail!')
} catch (all) {
    currentBuild.result = "FAILURE"
} finally {
    node('master') {
        step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'my@xyz.com', sendToIndividuals: true])
    }
}
The plugin is checking currentBuild.result for the status, and this isn't normally changed until after the script completes.
I want to use grunt for deployment and therefore want to read the configuration of remote hosts from the already existing ~/.ssh/config file.
To load that configuration I'm using sshconf, but I need to put the grunt.initConfig() call inside its callback so that the configuration is available when defining environments.
var sshconf = require('sshconf');

module.exports = function(grunt) {
  // Read in ssh configuration
  sshconf.read(function(err, sshHosts) {
    if (err)
      console.log(err);
    // SSH config loaded, now init grunt
    grunt.initConfig({
      sshconfig: {
        staging: {
          privateKey: grunt.file.read(sshHosts['project_staging'].properties.IdentityFile),
          host: sshHosts['project_staging'].properties.HostName,
          username: sshHosts['project_staging'].properties.User,
          port: sshHosts['project_staging'].properties.Port || 22,
          path: "/var/www/project"
        },
        production: {
          // ...
        }
      },
      // Tasks to be executed on remote server
      sshexec: {
        example_task: {
          command: 'uptime && hostname'
        }
      },
      sftp: {
        deploy: {
          files: {
            "./": ["*.json", "*.js", "config/**", "controllers/**", "lib/**", "models/**", "public/**", "views/**"]
          },
          options: {
            //srcBasePath: "test/",
            createDirectories: true
          }
        }
      }
      // More tasks
      // ...
    });
    grunt.loadNpmTasks('grunt-ssh');
    // More plugins ...
  });
};
When I call grunt --help it states:
> grunt --help
Grunt: The JavaScript Task Runner (v0.4.1)
…
Available tasks
(no tasks found)
If I do not wrap the grunt initialization in that callback (sshconf.read(function(err, sshHosts) {})), everything works fine (except that the ssh config is not loaded or not yet ready to be used).
Is what I am trying even possible and if so, how? Am I missing something obvious?
Grunt init cannot be used in an async fashion like this. Either read the sshconf synchronously, or use a task, as described in this answer: How can I perform an asynchronous operation before grunt.initConfig()?