Configure Jenkins plugin/credentials values with an API

I want to know if there is a Jenkins API (a remote access API) to set values in a Jenkins plugin's configuration. For example, the Artifactory plugin asks for the Artifactory URL only in the configuration manager (http://jenkins-url/configure), and a new URL cannot be added while creating a job.
Also, how can we create new credentials (SSH or username/password) on a Jenkins system with the Jenkins remote API?

Check out this example: https://gist.github.com/iocanel/9de5c976cc0bd5011653
import jenkins.model.*
import com.cloudbees.plugins.credentials.*
import com.cloudbees.plugins.credentials.common.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.plugins.credentials.impl.*
import com.cloudbees.jenkins.plugins.sshcredentials.impl.*
import hudson.plugins.sshslaves.*

// Store credentials in the global domain of the system credentials provider
domain = Domain.global()
store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

// SSH key credential: scope, id, username, private key source, passphrase, description
privateKey = new BasicSSHUserPrivateKey(
  CredentialsScope.GLOBAL,
  "jenkins-slave-key",
  "root",
  new BasicSSHUserPrivateKey.UsersPrivateKeySource(),
  "",
  ""
)

// Username/password credential: scope, id, description, username, password
usernameAndPassword = new UsernamePasswordCredentialsImpl(
  CredentialsScope.GLOBAL,
  "jenkins-slave-password", "Jenkins Slave with Password Configuration",
  "root",
  "jenkins"
)

store.addCredentials(domain, privateKey)
store.addCredentials(domain, usernameAndPassword)
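As for the "remote access API" part of the question: a script like the one above is usually pasted into the script console at http://jenkins-url/script, but it can also be executed remotely by POSTing it to Jenkins' /scriptText endpoint with a user's API token. Below is a minimal sketch of that call; jenkinsUrl, user, apiToken and the create-credentials.groovy file name are placeholders, and depending on your Jenkins version and CSRF settings a crumb header may also be required.
import java.net.{URI, URLEncoder}
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import java.util.Base64

object RunJenkinsScript {
  def main(args: Array[String]): Unit = {
    // Placeholders -- replace with your Jenkins URL, user and API token
    val jenkinsUrl = "http://jenkins-url"
    val user       = "admin"
    val apiToken   = "xxxxxxxxxxxxxxxx"

    // The Groovy credentials script from above, saved locally as create-credentials.groovy
    val groovy = new String(Files.readAllBytes(Paths.get("create-credentials.groovy")), StandardCharsets.UTF_8)
    val body   = "script=" + URLEncoder.encode(groovy, "UTF-8")
    val auth   = Base64.getEncoder.encodeToString(s"$user:$apiToken".getBytes(StandardCharsets.UTF_8))

    // POST the script to the script-console endpoint
    val request = HttpRequest.newBuilder(URI.create(s"$jenkinsUrl/scriptText"))
      .header("Authorization", s"Basic $auth")
      .header("Content-Type", "application/x-www-form-urlencoded")
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(response.statusCode() + ": " + response.body())
  }
}
The response body is whatever the Groovy script prints, which makes it easy to verify that the credentials were created.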

Related

How to download from Google Cloud Storage with Alpakka-gcs without providing a secret key?

I'm using Alpakka-gcs to connect to GCS from google-compute-engine, and it works perfectly if I provide the GCS secret key in application.conf like below.
alpakka.google.cloud.storage {
  project-id = "project_id"
  client-email = "client_email"
  private-key = "************gcs-secret-key************"
  base-url = "https://www.googleapis.com/" // default
  base-path = "/storage/v1" // default
  token-url = "https://www.googleapis.com/oauth2/v4/token" // default
  token-scope = "https://www.googleapis.com/auth/devstorage.read_write" // default
}
My question is: how can I connect from a compute-engine instance that already has a credential, without providing a secret key to Alpakka?
The code sample below works fine, but I want to know the Alpakka way.
def downloadObject(objectName: String, destFilePath: String): Unit = {
  import com.google.auth.oauth2.{ComputeEngineCredentials, GoogleCredentials}
  import com.google.cloud.storage.{BlobId, StorageOptions}
  import java.nio.file.Paths

  // Uses the credentials exposed by the Compute Engine metadata server
  def credential: GoogleCredentials = ComputeEngineCredentials.create()

  // projectId and bucketName are defined in the surrounding scope
  val storage = StorageOptions.newBuilder.setCredentials(credential).setProjectId(projectId).build.getService
  val blob = storage.get(BlobId.of(bucketName, objectName))
  blob.downloadTo(Paths.get(destFilePath))
}
If you look into the Alpakka sources, you can see how the access token is created. Sadly, this version only supports an internal call to GoogleTokenApi, an Alpakka-made client for requesting tokens from Google Cloud, and it is based solely on the private key, not on the metadata server or the GOOGLE_APPLICATION_CREDENTIALS environment variable.
You can propose a change in the project, or even develop it and contribute it to the project yourself using the Google Cloud OAuth client library.
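If you want to experiment with such a change, the same google-auth-library that the working snippet above uses can hand out access tokens straight from the metadata server, with no key file involved. A minimal sketch of what the token lookup could be based on (the MetadataAccessToken name is made up for illustration, and it assumes google-auth-library-oauth2-http is on the classpath):
import com.google.auth.oauth2.ComputeEngineCredentials

object MetadataAccessToken {
  // Asks the GCE metadata server for an OAuth2 access token; no key file is involved
  def fetch(): String = {
    val credentials = ComputeEngineCredentials.create()
    credentials.refreshIfExpired()          // obtains or renews the token as needed
    credentials.getAccessToken.getTokenValue
  }
}
A token obtained this way could then be fed to Alpakka in place of the one it currently derives from the private key.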

Does SBT support preemptive auth for downloading packages?

I am running SBT 1.2.8 and my project needs to download packages from a repo on a privately hosted Artifactory instance. My repo is protected by basic auth. After reading a multitude of examples and instructions, I created a credentials.properties file in my project.
realm=Artifactory Realm
host=artifactory.mycompany.com
username=my_username
password=my_password
I then added the following to my build.sbt file
credentials += Credentials(new File("credentials.properties"))
Then I added the repository to my list of resolvers in resolvers.sbt
"My Company Artifactory" at "https://artifactory.mycompany.com/artifactory/my_private_repo/",
I built my application and was able to download the protected packages just fine.
However, a system administrator at my company requested I turn on the “Hide Existence of Unauthorized Resources” setting in Artifactory. This setting forces Artifactory to return 404 errors when an unauthenticated user tries to access protected resources; without it, Artifactory returns a 401 with a WWW-Authenticate header.
Suddenly, my application was unable to resolve its dependencies. I turned the Artifactory setting off and then back on again and verified this setting was, in fact, the cause of my problems.
It appears as though SBT will not send credentials unless it is challenged with a 401 and a WWW-Authenticate header (with the proper realm). Looking at the docs and GitHub issues for SBT, Ivy, and Coursier, it seems this “preemptive authentication” is not a supported feature.
I spent many hours trying to resolve this in various ways, but I cannot find a solution. Here is what I have tried:
Adding my Artifactory username and password to the repository URL, so it looks like https://my_username:my_password@artifactory.mycompany.com/artifactory/my_private_repo/. This worked in my browser and a REST client, but not with SBT.
Omitting the “realm” from my credentials file
Switching to SBT 1.3.9 and trying everything above with the new version.
Does anyone know how I can get SBT to use preemptive HTTP basic auth? It appears both Maven and Gradle support this (see links below), but I cannot find anything in the SBT docs.
Maven support for preemptive auth: https://jfrog.com/knowledge-base/why-does-my-maven-builds-are-failing-with-a-404-error-when-hide-existence-of-unauthorized-resources-is-enabled/
Gradle support for preemptive auth:
https://github.com/gradle/gradle/pull/386/files
I'm almost thinking of setting up a local proxy to send the proper headers to Artifactory, and pointing SBT at the local proxy as a resolver. However, that seems needlessly cumbersome for developers to use.
You are correct.
You can set up an AbstractRepository. See https://github.com/SupraFii/sbt-google-artifact-registry/blob/master/src/main/scala/ch/firsts/sbt/gar/ArtifactRegistryRepository.scala#L21 for an example:
package ch.firsts.sbt.gar

import java.io.File
import java.util

import com.google.cloud.artifactregistry.wagon.ArtifactRegistryWagon
import org.apache.ivy.core.module.descriptor.Artifact
import org.apache.ivy.plugins.repository.AbstractRepository
import org.apache.maven.wagon.repository.Repository

// ArtifactRegistryResource is defined elsewhere in the linked plugin.
class ArtifactRegistryRepository(repositoryUrl: String) extends AbstractRepository {
  val repo  = new Repository("google-artifact-registry", repositoryUrl)
  val wagon = new ArtifactRegistryWagon()

  override def getResource(source: String): ArtifactRegistryResource = {
    val plainSource = stripRepository(source)
    wagon.connect(repo)
    ArtifactRegistryResource(repositoryUrl, plainSource, wagon.resourceExists(plainSource))
  }

  override def get(source: String, destination: File): Unit = {
    // Checksum files are requested through the same path as the main artifact
    val adjustedSource = if (destination.toString.endsWith("sha1"))
      source + ".sha1"
    else if (destination.toString.endsWith("md5"))
      source + ".md5"
    else
      source

    wagon.connect(repo)
    wagon.get(adjustedSource, destination)
  }

  override def list(parent: String): util.List[String] = sys.error("Listing repository contents is not supported")

  override def put(artifact: Artifact, source: File, destination: String, overwrite: Boolean): Unit = {
    val plainDestination = stripRepository(destination)
    wagon.connect(repo)
    wagon.put(source, plainDestination)
  }

  private def stripRepository(fullName: String): String = fullName.substring(repositoryUrl.length + 1)
}
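To actually consume such a repository from a build, it still has to be surfaced to sbt as a resolver. The sketch below shows one possible wiring, wrapping the custom Ivy repository in a pattern-based RepositoryResolver; the resolver name and layout pattern are illustrative assumptions rather than values taken from the linked plugin, and the final RawRepository hookup differs between sbt versions, so it is shown only as a comment.
import org.apache.ivy.plugins.resolver.RepositoryResolver

object ArtifactRegistryResolver {
  // Wraps the custom Ivy repository in a pattern-based Ivy resolver.
  // The Maven-style artifact pattern below is an illustrative assumption.
  def ivyResolver(repositoryUrl: String): RepositoryResolver = {
    val resolver = new RepositoryResolver
    resolver.setName("google-artifact-registry")
    resolver.setM2compatible(true)
    resolver.setRepository(new ArtifactRegistryRepository(repositoryUrl))
    resolver.addArtifactPattern(
      repositoryUrl + "/[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]")
    resolver
  }
}

// In the build definition (sketch; the RawRepository constructor differs between sbt versions):
// resolvers += new sbt.librarymanagement.RawRepository(
//   ArtifactRegistryResolver.ivyResolver("https://<host>/<repo>"), "google-artifact-registry")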

Cannot access https://dev.azure.com/<myOrg> using TFS extended client version 15

We are migrating some code that used to run against an on-premises TFS server but now needs to run against Azure DevOps (previously Team Services). The credentials I'm using have been validated to successfully authenticate to our DevOps organization, but running the following code after referencing the
Microsoft.TeamFoundationServer.ExtendedClient
NuGet package always results in "TF30063: You are not authorized to access https://dev.azure.com/<myOrg>". The code for non-interactive authentication is posted below. Do I need to use a different authentication mechanism or a different credential type to get this working?
System.Net.NetworkCredential networkCredential = new System.Net.NetworkCredential(_userName, DecryptedPassword, _domain);
try
{
    // Create TeamFoundationServer object
    _teamFoundationCollection = new TfsTeamProjectCollection(_serverUrl, networkCredential);
    _teamFoundationCollection.Authenticate();
}
catch (Exception ex)
{
    // Not authorized
    throw new TeamFoundationServerException(ex.Message, ex.InnerException);
}
Since you want to use the .NET client libraries, you could refer to the following link:
https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/dotnet-client-libraries?view=azure-devops
Patterns for use:
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.Client;
using Microsoft.TeamFoundation.SourceControl.WebApi;
using Microsoft.VisualStudio.Services.WebApi;
const String c_collectionUri = "https://dev.azure.com/fabrikam";
const String c_projectName = "MyGreatProject";
const String c_repoName = "MyRepo";
// Interactively ask the user for credentials, caching them so the user isn't constantly prompted
VssCredentials creds = new VssClientCredentials();
creds.Storage = new VssClientCredentialStorage();
// Connect to Azure DevOps Services
VssConnection connection = new VssConnection(new Uri(c_collectionUri), creds);
// Get a GitHttpClient to talk to the Git endpoints
GitHttpClient gitClient = connection.GetClient<GitHttpClient>();
// Get data about a specific repository
var repo = gitClient.GetRepositoryAsync(c_projectName, c_repoName).Result;

Dynamically Creating DAG based on Row available on DB Connection

I want to create DAGs dynamically from a database table query. When I try to create DAGs dynamically from either an exact number range or from the objects available in the Airflow settings, it succeeds. However, when I use a PostgresHook and create a DAG for each row of my table, I can see a new DAG generated whenever I add a new row, but I can't click the newly created DAG in my Airflow web server UI. For more context, I'm using Google Cloud Composer. I already followed the steps mentioned in "DAGs not clickable on Google Cloud Composer webserver, but working fine on a local Airflow", but it's still not working in my case.
Here's my code
from datetime import datetime, timedelta

from airflow import DAG
import psycopg2
from airflow.hooks.postgres_hook import PostgresHook
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from psycopg2.extras import NamedTupleCursor
import os

default_args = {
    "owner": "debug",
    "depends_on_past": False,
    "start_date": datetime(2018, 10, 17),
    "email": ["airflow@airflow.com"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
    # 'queue': 'bash_queue',
    # 'pool': 'backfill',
    # 'priority_weight': 10,
    # 'end_date': datetime(2016, 1, 1),
}


def create_dag(dag_id, schedule, default_args):
    def hello_world_py(*args):
        print('Hello from DAG: {}'.format(dag_id))

    dag = DAG(dag_id,
              schedule_interval=timedelta(days=1),
              default_args=default_args)

    with dag:
        # The task is attached to `dag` by the `with dag:` context
        t1 = PythonOperator(
            task_id=dag_id,
            python_callable=hello_world_py)

    return dag


dag = DAG("dynamic_yolo_pg_", default_args=default_args,
          schedule_interval=timedelta(hours=1))

"""
Behavior:
Create an exact DAG which in turn will create its own file
https://www.astronomer.io/guides/dynamically-generating-dags/
"""

pg_hook = PostgresHook(postgres_conn_id='some_db')
conn = pg_hook.get_conn()
cursor = conn.cursor(cursor_factory=NamedTupleCursor)
cursor.execute("SELECT * FROM airflow_test_command;")
commands = cursor.fetchall()

for command in commands:
    dag_id = command.id
    schedule = timedelta(days=1)
    id = "dynamic_yolo_" + str(dag_id)
    print(id)
    globals()[id] = create_dag(id, schedule, default_args)
This can be solved by running a self-managed Airflow webserver, following the steps mentioned in [1]. After you do this, if you decide to add authentication in front of your self-managed webserver, then once you have created the ingress, your BackendServices should appear in the Google IAP console and you can enable IAP. If you want to access Airflow programmatically, you can also use JWT authentication with a service account for your self-managed Airflow webserver [2].
[1] https://cloud.google.com/composer/docs/how-to/managing/deploy-webserver
[2] https://cloud.google.com/iap/docs/authentication-howto

How to configure AmazonDynamoDBAsyncClient endpoint from sbt-dynamodb settings

I'm using the sbt-dynamodb plugin to run a local AWS DynamoDB for my integration tests. To set up the AWS SDK client I need to provide a client endpoint:
val sdkClient = new AmazonDynamoDBAsyncClient(awsCreds)
sdkClient.setEndpoint("http://localhost:8000")
The plugin allows configuring the DB port number in build.sbt:
DynamoDBLocal.Keys.dynamoDBLocalPort := Some(8080)
How can I access that sbt key (the port number) from my test code? To do something like:
val dbPort = ??? // get from Keys.dynamoDBLocalPort somehow
sdkClient.setEndpoint("http://localhost:" + dbPort)
Thanks!
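One possible way to bridge the two worlds, sketched under the assumption that dynamoDBLocalPort is a SettingKey[Option[Int]] (as the Some(8080) assignment above suggests): have the build pass the configured port to the forked test JVM as a system property and read it back in the test. The property name dynamodb.local.port is an arbitrary placeholder.
// build.sbt (sketch; use `javaOptions in Test` instead of `Test / javaOptions` on sbt < 1.1)
Test / fork := true            // javaOptions only take effect in a forked test JVM
Test / javaOptions += {
  val port = DynamoDBLocal.Keys.dynamoDBLocalPort.value.getOrElse(8000)
  s"-Ddynamodb.local.port=$port"   // hand the port to the test JVM
}

// In the integration test: read the property the build passed in
val dbPort = sys.props.getOrElse("dynamodb.local.port", "8000")
sdkClient.setEndpoint("http://localhost:" + dbPort)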