How to set up a machine as a mirror server for Yocto when fetching packages? - yocto

When building a project with PetaLinux (a Yocto-based distribution), it needs Internet access to fetch packages from servers (git or others).
My working machine is not allowed to access the Internet (it only has LAN access), so I plan to set up a machine on this LAN that can access the Internet and act as a mirror server for Yocto.
Does anyone have any idea how to set up a server like this? Please help.

You can check the following pages to set up a source mirror:
Source download mirror
Setting up mirrors
Replicating a build offline
Basically, you launch a build on the source-mirror machine with these options:
SOURCE_MIRROR_URL ?= "file:///source_mirror/sources/"
INHERIT += "own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
You can fetch the sources only (without building) with the following command:
bitbake --runall=fetch <target>
Then you launch an HTTP (or FTP) server that serves the ./source_mirror/sources/ folder at http://example.com/my-source-mirror.
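For example, a minimal sketch of serving that folder over HTTP with Python's built-in web server (the port and how you map it to a name like example.com are up to you; any HTTP or FTP server works just as well):
# on the mirror machine, from the directory that holds the fetched sources
cd /source_mirror/sources
python3 -m http.server 8000
# offline machines would then point SOURCE_MIRROR_URL at http://<mirror-host>:8000/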
Then, on the offline machine, you set:
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "http://example.com/my-source-mirror"
BB_NO_NETWORK = "1" # or BB_FETCH_PREMIRRORONLY = "1"
If you have access to a proxy, you can check these:
sources behind proxy
working behind proxy

Copy-and-paste shortcut: below is a working configuration you can just copy and paste without investing the time to understand every little detail. :)
Architecture: In this example there are two types of machine: the "build server" and several instances of the "developer PC".
Machine Preparation:
Mount a shared folder on all machines (server and developers), backed by any kind of file server (e.g. NFS), mapping its storage to /mnt/mirror.
Example for NFS in case this is new to you (skip if you already have NFS mounted): https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html (stop reading at the caption "Yocto" and proceed as below); a minimal sketch also follows right below.
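A minimal NFS sketch, assuming the file server has the address 192.168.1.10 and exports /srv/mirror (adapt both to your LAN):
# on the file server: export a directory that will back /mnt/mirror
sudo mkdir -p /srv/mirror
echo "/srv/mirror 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra
# on the build server and on every developer PC: mount it at /mnt/mirror
sudo mkdir -p /mnt/mirror
sudo mount -t nfs 192.168.1.10:/srv/mirror /mnt/mirror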
Overall Configuration:
Add the local.conf snippet at the end of this answer to the file conf/local.conf and remove all prior lines that conflict (i.e. that set any of the variables defined there, such as DL_DIR).
Machine configuration:
For the developer machines use A (comment out B); for the build server use B (comment out A).
Hit it:
When the server PC runs bitbake for the first time, it populates the mirror folders. After the first server build has finished, the clients will use the mirror (the source-mirror to bypass Internet dependencies and the sstate-cache to speed up builds).
local.conf:
# +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Activate either A or B depending on whether this is a developer PC or the build server
# +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MIRROR_SERVER = "file:///mnt/mirror/"
# ########################################################
# A) Settings for developer PC operation
# ########################################################
BB_FETCH_PREMIRRORONLY = "1"
SOURCE_MIRROR_URL = "${MIRROR_SERVER}/source-mirror"
UNINATIVE_URL = "${SOURCE_MIRROR_URL}"
INHERIT += "own-mirrors"
SSTATE_MIRRORS = "\
    file://.* ${MIRROR_SERVER}/sstate-cache/PATH;downloadfilename=PATH \n \
"
# ########################################################
# B) Settings for build server operation
# ########################################################
#SSTATE_DIR = "/mnt/mirror/sstate-cache"
#BB_GENERATE_MIRROR_TARBALLS = "1"
## To populate the source mirror, start a normal server build or run: bitbake --runall=fetch <image>
# ########################################################
# SETTINGS FOR BOTH, A and B
# ########################################################
DL_DIR = "/mnt/mirror/source-mirror"
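A short sketch of the resulting workflow, assuming an image named core-image-minimal (substitute your own image):
# on the build server (section B active): populate the source mirror and the sstate cache
bitbake --runall=fetch core-image-minimal
bitbake core-image-minimal
# on a developer PC (section A active): everything now comes from /mnt/mirror, no Internet needed
bitbake core-image-minimal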

Related

How to configure ClamAV's freshclam.conf to point to a local nexus repository?

My company has tasked me with installing ClamAV on a large number of machines running RHEL6, none of which have internet access. I know freshclam.conf can be edited to point to a local mirror of the virus database, in this section of the file:
# This option allows you to easily point freshclam to private mirrors.
# If PrivateMirror is set, freshclam does not attempt to use DNS
# to determine whether its databases are out-of-date, instead it will
# use the If-Modified-Since request or directly check the headers of the
# remote database files. For each database, freshclam first attempts
# to download the CLD file. If that fails, it tries to download the
# CVD file. This option overrides DatabaseMirror, DNSDatabaseInfo
# and ScriptedUpdates. It can be used multiple times to provide
# fall-back mirrors.
# Default: disabled
#PrivateMirror mirror1.mynetwork.com
#PrivateMirror mirror2.mynetwork.com
The company has Sonatype Nexus repositories available, to which we can push the database files at an interval of our choosing once I have access. I know I can get a link to said repository once it has been created. Do I just paste that link where mirror1.mynetwork.com currently is, in its entirety, or are there additions I have to make? I'm losing my mind trying to find this simple answer and not being able to find any examples, as I have zero experience with any of this.

How do I get Salt Master to apply a basic SLS file to work against a Salt Minion?

I am programming and want to push down code with Salt. I have recently installed Salt minion and Salt master on two CentOS 7.x servers. They are both Salt version 2015.8.7. My salt '*' test.ping worked. This, to me, proves /etc/salt/minion.yml and /etc/salt/master.yml were set up correctly on their respective servers. It proves the services are up and running.
Here are the contents of top.sls:
base:
  '*':
    - core
Here are the contents of core.sls:
{{ salt['runtests_helpers.get_sys_temp_dir_for_path']('testfile') }}:
  file:
    - managed
    - source: salt://testfile
When I run
# salt 'fqdnOfSaltMinionServer' state.apply
I get an error like this "..No Top file or external nodes data matches found...Error: Minions returned with non-zero exit code"
How do I uninstall Salt master from the server that I want to be Salt minion? How do I get a basic .sls file to work? Ping works. I don't see what is wrong with my top.sls or core.sls files. I have a small, simple text file named testfile. I want to transfer it from the Salt master server to Salt minion. I don't see what is wrong with my set up.
Are you using the yum/rpm-provided salt master on CentOS? I was facing a similar issue and had to create a /srv/salt directory on the salt master server to hold my files (core.sls and testfile in your example) before I could get anywhere.
At least with salt 2016.11.1 (Carbon), this is the default setting (in /etc/salt/master) where the top file must reside:
##### File Server settings #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.
# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt
#
As John's previous answer says, putting the top file in /srv/salt is what you need to do if you have not changed the default in /etc/salt/master.
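A minimal sketch of that layout on the master, assuming the default file_roots of /srv/salt (the state file names and the minion ID are the ones from the question):
# on the salt master: create the default file root and place the state files there
mkdir -p /srv/salt
cp top.sls core.sls testfile /srv/salt/
# then apply the state to the minion again
salt 'fqdnOfSaltMinionServer' state.apply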

Jenkins: dynamically connect slave to master without knowing node secret

I struggle to (dynamically) start the Jenkins slave agent from my dedicated slave machine (Windows 2012 R2 server).
The Jenkins master (ver. 1.617 - which I can upgrade if necessary, but not downgrade [before ver. 1.498 no credentials were required]) is on a Windows 2012 R2 server.
Security is enabled and configured via the Active Directory plugin and Project-based Matrix Authorization Strategy.
Because of the Active Directory involved, I cannot simply add a system user to authenticate with (via -jnlpCredentials username:password or -jnlpCredentials username:apitoken). As a workaround I am using my Jenkins service user for that, but I don't like its API token lying around hard-coded in the script.
I am trying to use the alternative -secret secretKey, but that secretKey is randomly created when a slave node is registered on the master.
Since I am using the Azure Slave Plugin, the slave nodes and the associated virtual machines are created for me.
The virtual machines are created from a pre-defined image, that I can change in whatever way necessary.
In this pre-defined image I have a PowerShell script executed on start-up. It is derived from the sample given here. It doesn't have to be PowerShell, any other way would be okay as well.
Set-ExecutionPolicy Unrestricted
# base url to Jenkins master
$jenkinsserverurl = "https://jenkins.mycompany.com/"
# the azure-slave-plugin is creating VMs with names like 'Azure0807150842'
$vmname = (Get-Culture).TextInfo.ToTitleCase($env:computername.tolower())
# authenticate with Jenkins service user + API-token - since we don't know the '-secret'
$apiToken="jenkins_user:1234abcdefab56c7d890de1f2a345b67"
Write-Output "Downloading jenkins slave jar "
# in order to avoid updating it manually for Jenkins master updates
$slaveJarSource = $jenkinsserverurl + "jnlpJars/slave.jar"
$slaveJarLocal = "C:\jenkins_home\slave.jar"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($slaveJarSource, $slaveJarLocal)
Write-Output "Executing slave process "
$jnlpSource = $jenkinsserverurl+"computer/" + $vmname + "/slave-agent.jnlp"
# expect java.exe in the PATH, and use -noCertificateCheck to skip SSL validation
& java -jar $slaveJarLocal -jnlpCredentials $apiToken -jnlpUrl $jnlpSource -noCertificateCheck
Downloading the JNLP file and reading the contained secret is not an option either, since I need proper HTTP authentication at the Jenkins master for that as well.
Write-Output "Downloading jenkins slave jnlp "
$jnlpSource = $jenkinsserverurl+"computer/" + $vmname + "/slave-agent.jnlp"
$jnlpLocal = "C:\jenkins_home\slave-agent.jnlp"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($jnlpSource, $jnlpLocal)
Write-Output "Extracting secret from jenkins slave jnlp "
[xml]$jnlpFile = Get-Content $jnlpLocal
# the first argument in the generated JNLP contains the secret
$secret = Select-Xml "//jnlp/application-desc/argument[1]/text()" $jnlpFile
How can I get my hands on the generated secret (without disabling security), or
What kind of credentials can I use instead (without using an actual user, such as my own or the Jenkins service user)?
In an ideal world, the plugin creating the slave node and its VM would log in to the created VM and execute a script similar to the one in my question - with the addition of the injected Jenkins server URL, VM name, and generated secret. Since that is not the case for the current Azure Slave Plugin version, I am stuck with my workaround script - using my existing Jenkins service user.
I use this to let the plugin create a bigger/faster VM on the fly, which is only used for a daily test run and is shut down automatically the rest of the time (and therefore incurs no costs when unused).
If someone is interested, this is the setup I ended up with:
Generalized Azure VM image (Windows 2012 R2, with JDK, Maven, and Git installed). Via NSSM, I installed the PowerShell script (which starts the slave agent) as a Windows service, to be executed automatically at machine boot-up (same script as in the question above).
Jenkins master with the Azure Slave Plugin installed and configured to use this VM image, with shutdown-on-idle after five minutes.
Jenkins Maven project (job) that is configured to only run on an Azure slave node, checks out my test project from Git, and executes the JUnit Selenium tests from there.
I've hit the same issue with the Jenkins Openstack plugin.
It seems that they will inject the secret directly in the machine's metadata.
https://github.com/jenkinsci/openstack-cloud-plugin/issues/104
In the meantime, I'll use the SSH slave instead; it is more secure since only the public key needs to be deployed on the slave.

How to deploy with Release Management to remote datacenter

We are running TFS and Release Management on premises, and I want to deploy my applications to a remote datacenter.
Access is over the internet, so there are no Windows shares available.
I am using the vNext templates, and AFAIK RM seems to only support UNC paths over Windows shares.
How can I use Release Management to deploy software to this datacenter?
I'm working on this solution:
Use WebDAV on an IIS server located inside the datacenter.
The RM server and the target can use the WebDAV client built into Windows and access it via a UNC path.
I haven't gotten this to work yet, as RM won't use the correct credentials to log on to the WebDAV server.
Updated with my solution
This is only a proof of concept, and is not production tested.
Set up a WebDAV site accessible from both the RM server and the target server
Install the "Desktop Experience" feature on both servers
Make the following DLL:
using System;
using System.ComponentModel.Composition;
using System.Diagnostics;
using System.IO;
using Microsoft.TeamFoundation.Release.Common.Helpers;
using Microsoft.TeamFoundation.Release.Composition.Definitions;
using Microsoft.TeamFoundation.Release.Composition.Services;
namespace DoTheNetUse
{
    [PartCreationPolicy(CreationPolicy.Shared)]
    [Export(typeof(IThreadSafeService))]
    public class DoTheNetUse : BaseThreadSafeService
    {
        public DoTheNetUse() : base("DoTheNetUse")
        {}

        protected override void DoAction()
        {
            Logger.WriteInformation("DoAction: [DoTheNetUse]");
            try
            {
                Logger.WriteInformation("# DoTheNetUse.Start #");
                Logger.WriteInformation("{0}, {1}", Environment.UserDomainName, Environment.UserName);
                {
                    Logger.WriteInformation("Net use std");
                    var si = new ProcessStartInfo("cmd.exe", @"/c ""net use \\sharedwebdavserver.somewhere\DavWWWRoot\ /user:webdavuser webdavuserpassword""");
                    si.UseShellExecute = false;
                    si.RedirectStandardOutput = true;
                    si.RedirectStandardError = true;
                    var p = Process.Start(si);
                    p.WaitForExit();
                    Logger.WriteInformation("Net use output std:" + p.StandardOutput.ReadToEnd());
                    Logger.WriteInformation("Net use output err:" + p.StandardError.ReadToEnd());
                }
                //##########################################################
                Logger.WriteInformation("# Done #");
            }
            catch (Exception e)
            {
                Logger.WriteError(e);
            }
        }
    }
}
Name it "ReleaseManagementMonitor2.dll"
Place it in a subfolder of the service "ReleaseManagementMonitor"
Configure the shared path as the solution below states.
DO NOT OVERWRITE THE EXISTING "ReleaseManagementMonitor2.dll"
The reason that this works is MEF.
The ReleaseManagementMonitor service tries to load the DLL "ReleaseManagementMonitor2.dll" from all subfolders.
This DLL implements a service interface that RM recognises.
It then runs "net use" to apply the credentials to the session that the service runs under, and thereby grants access to the otherwise inaccessible WebDAV server.
This solution is certified "Works on my machine"
RM does only work with UNC paths; you are right about that.
You can leverage that to make your scenario work -
In Theory
Create a boundary machine on the RM domain, where your drops can be copied.
The deploy action running on your datacenter can then copy bits from this boundary machine, using credentials that have access on that domain. (These credentials are provided by you in the WPF console)
How this works
1. Have a dedicated machine on the RM server domain (say D1) that will be used as a boundary machine.
2. Define this machine as a boundary machine in RM by specifying a shared path that will be used by your data centre. Go to the Settings tab in your WPF console and create a new variable - { Key = RMSharedUNCPath, Value = \\BoundaryMachine\DropsLocation }. RM now understands you want to use this machine as your boundary machine.
3. Make sure you take care of these permissions
RM Server should have write permissions on the \\BoundaryMachine\DropsLocation share.
Pass down credentials of domain D1 to the target machine in the data centre (Domain D2), that can be used to access the share.
4. Credentials can be passed down from the WPF console; you will have to define the following two config variables in the Settings tab again.
Key = RMSharedUNCPathUser ; Value = domain D1 user name
Key = RMSharedUNCPathPwd ; Value = password for the user defined above.
PS - Variable names are case sensitive.
Also, to let RM know that you want to use the shared UNC mechanism, check the corresponding checkbox for the RM server and connect to it via IP and not DNS name, as these must be in different domains.
Try using Get-Content on the local server and then Set-Content on the remote server, passing the file contents over; you could package everything into an archive of some kind.
Release Management copies VisualStudioRemoteDeployer.exe to the C:\Windows\DtlDownloads\VisualStudioRemoteDeployer folder on the target server, and then copies the scripts from the specified location to the target server using robocopy.
So you have to grant the target server permissions on your scripts location.
Release Management update 4 supports "Build drops stored on TFS servers"
http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx

capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard git deploy implementation with Capistrano v3 deploys the same repository on all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need to deploy any code.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature back from Cap2 in to Cap3.
You can of course take the .rake files out of the Gem, and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to disable a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For Example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx Cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/