I'm currently experimenting with ACI construction for rkt containers. During my experiments I've built some containers specifically for use as dependencies. I now want to use these .aci images as dependencies for other images. Since these files are fetched by name (for example "quay.io/alpine-sh"), I wonder if there is a way to refer to actual local .aci files.
Is there a way to import these .aci files from the local filesystem or do I have to set up a local webserver to serve as a repository?
Dependencies in acbuild (at least up to version 0.3) can only be defined as HTTP links,
so you need to make your ACI available over HTTP to use it as a dependency in acbuild.
It's not that hard to publish your ACI to make it available over HTTP; the image archive can actually be hosted on GitHub or Bitbucket.
Recent versions of acbuild seem to support this,
since the related issue (cache dependencies across acbuild invocations #144) is closed.
Cached ACIs are stored in the depstore-tar and depstore-expanded directories inside $CONTEXT_ROOT/.acbuild. If the contents of those directories are somehow preserved between acbuild runs,
the ACIs won't be downloaded over and over again.
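As a rough illustration of that idea (the same thing the acbuild-plus script below automates), you could keep the dependency stores in a persistent location and symlink them into the work directory; the /var/cache/acbuild-deps path here is only a hypothetical example:
# hypothetical persistent cache location; the .acbuild layout is as described above
CACHE=/var/cache/acbuild-deps
mkdir -p "$CACHE/depstore-tar" "$CACHE/depstore-expanded"
acbuild begin
rm -rf .acbuild/depstore-tar .acbuild/depstore-expanded
ln -s "$CACHE/depstore-tar" .acbuild/depstore-tar
ln -s "$CACHE/depstore-expanded" .acbuild/depstore-expanded
# ...continue with the usual acbuild commands; downloaded dependencies now land in the cache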
When I played with acbuild I was annoyed that it re-downloads dependencies on every build.
I've written a script, https://bitbucket.org/legeyda/anyorigin/src/tip/acbuild-plus,
which configures symbolic links inside $CONTEXT_ROOT/.acbuild to point to
persistent directories inside /var/lib/acbuild/hack. The usage is simple:
acbuild begin
acbuild-plus init target
After that, all dependencies will be cached by acbuild.
You can also manually install an ACI file so that it's available to acbuild.
This is as simple as
acbuild-plus install <your-image.aci>
I've tested the script with acbuild v0.3.0.
You can find an example of its use in the Makefile next to acbuild-plus in the repository.
What are the options to download .py files into the execution environment?
In this example:
class Preprocess(dsl.ContainerOp):
    def __init__(self, name, bucket, cutoff_year):
        super(Preprocess, self).__init__(
            name=name,
            # image needs to be a compile-time string
            image='gcr.io/<project>/<image-name>/cpu:v1',
            command=['python3', 'run_preprocess.py'],
            arguments=[
                '--bucket', bucket,
                '--cutoff_year', cutoff_year,
                '--kfp'
            ],
            file_outputs={'blob-path': '/blob_path.txt'}
        )
The run_preprocess.py file is called from the command line.
The question is: how does that file get into the container?
I have seen this interesting example: https://github.com/benjamintanweihao/kubeflow-mnist/blob/master/pipeline.py , and it clones the code before running the pipeline.
The other way would be to git clone in the Dockerfile (although then the image would take forever to build).
What are other options?
To kickstart KFP development using Python, try the following tutorial: Data passing in python components.
"it clones the code before running the pipeline"
"The other way would be git cloning with Dockerfile (although the image would take forever to build)"
Ideally, the files should be inside the container image (the Dockerfile method). This ensures maximum reproducibility.
For not-too-complex Python scripts, the Lightweight python component feature allows you to create a component from a Python function. In this case the script code is stored in the component's command line, so you do not need to upload the code anywhere.
Putting scripts somewhere remote (e.g. cloud storage or website) is possible, but can reduce reliability and reproducibility.
P.S.
"although the image would take forever to build"
It shouldn't. The first time it might be slow due to having to pull the base image, but after that it should be fast, since only the new layers need to be built and pushed. (This requires choosing a good base image that has all dependencies installed, so your Dockerfile only adds your scripts.)
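As a hedged sketch of that layering idea (the image names and paths here are hypothetical, not taken from the question):
# base image that is assumed to already contain Python and the heavy dependencies
cat > Dockerfile <<'EOF'
FROM gcr.io/my-project/preprocess-base:v1
# only this thin layer changes when the script changes
COPY run_preprocess.py /run_preprocess.py
EOF
docker build -t gcr.io/my-project/preprocess/cpu:v2 .
# the push re-uploads only the changed layers
docker push gcr.io/my-project/preprocess/cpu:v2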
Does anybody know how to use workbox without getting it from the CDN? I tried this...
add workbox-cli to my dependencies:
"workbox-cli": "^3.6.3"
which gets me all of the following dependencies
$ ls node_modules | grep workbox
workbox-background-sync
workbox-broadcast-cache-update
workbox-build
workbox-cacheable-response
workbox-cache-expiration
workbox-cli
workbox-core
workbox-google-analytics
workbox-navigation-preload
workbox-precaching
workbox-range-requests
workbox-routing
workbox-strategies
workbox-streams
workbox-sw
Then I replaced this line in the examples
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.6.1/workbox-sw.js');
with this
importScripts('workbox-sw.js');
after copying node_modules/workbox-sw/build/workbox-sw.js to the public folder
But now I realise, by looking at the network tab, that that file still gets all the other modules from the CDN.
(I thought it would be built with everything inside it.)
Can anybody tell me if there is an npm package somewhere that already has everything inside it? Or should I copy the modules I need from the npm folder, and somehow tie them all together myself? Or do I have to use the webpack plugin? (which I guess will only bundle the modules that I use)
(Update: Workbox v5 makes the process of using a local copy of the Workbox runtime much simpler, and in most cases, it's the default.)
There's one more step that's required. The "Using Local Workbox Files Instead of CDN" guide has the details:
If you don’t want to use the CDN, it’s easy enough to switch to
Workbox files hosted on your own domain.
The simplest approach is to get the files via workbox-cli's copyLibraries command, or from a GitHub Release, and then tell workbox-sw where to find these files via the modulePathPrefix config option.
If you put the files under /third_party/workbox/, you would use them
like so:
importScripts('/third_party/workbox/workbox-sw.js');
workbox.setConfig({modulePathPrefix: '/third_party/workbox/'});
With this, you’ll use only the local Workbox files.
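For example, using the workbox-cli dependency already in the question, something along these lines should work (the public/third_party/workbox/ target path is just an assumption, and copyLibraries writes a versioned subdirectory such as workbox-v3.6.3 that the modulePathPrefix needs to point at):
# copy the Workbox runtime files out of node_modules into the public folder
npx workbox copyLibraries public/third_party/workbox/
# then in the service worker, adjust the paths to the copied, versioned directory:
#   importScripts('/third_party/workbox/workbox-v3.6.3/workbox-sw.js');
#   workbox.setConfig({modulePathPrefix: '/third_party/workbox/workbox-v3.6.3/'});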
I'm new to yocto. I'm trying to learn how the packages are added, how to create new layers and so on... just poking around. Started by cloning poky and playing around.
To my understanding, the bblayers.conf file is critical to the project configuration and what you end up building (what layers and packages go into your final image).
This might be the wrong assumption, but I also have a feeling that the build/ folder is where the things you build (with bitbake) live: images, lots of things needed to build them, a big cache of stuff... You can delete it and rebuild it if you somehow broke it. Or you can just copy everything except the build/ folder and continue working on a different computer.
Apparently that's not quite the case. The build/conf/ folder has the important .conf files like bblayers.conf.
Can someone explain why this is the case? Is there an elegant way to separate the project config from the build folder?
There are a couple of levels to a Yocto Project tree, mainly:
- BSPDIR: TOPDIR (build), sources, setup-environment
- BSPDIR/setup-environment: initializes all the variables for bitbake
- BSPDIR/sources: the metadata (layers)
- TOPDIR: conf/ sstate-cache/ cache/ tmp/ downloads/
- TOPDIR/downloads: sources fetched by recipes
- TOPDIR/conf/: stores all the configuration, mainly bblayers.conf, local.conf and sanity_info
- TOPDIR/conf/bblayers.conf: stores the paths to all the metadata (layers) that will be loaded
- TOPDIR/conf/local.conf: the build configuration
- TOPDIR/conf/sanity_info: a double check that the paths used in the last build match the current build
- TOPDIR/tmp/: where all the compiling and building work happens
In BSPDIR/sources/poky/meta/conf/bitbake.conf:
sources/poky/meta/conf/bitbake.conf:TMPDIR ?= "${TOPDIR}/tmp"
sources/poky/meta/conf/bitbake.conf:PERSISTENT_DIR = "${TOPDIR}/cache"
sources/poky/meta/conf/bitbake.conf:DL_DIR ?= "${TOPDIR}/downloads"
sources/poky/meta/conf/bitbake.conf:SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
TOPDIR is the directory that gets initialized when you run setup-environment or oe-init-build-env. All the other bitbake configuration variables can be changed as needed in conf/local.conf;
e.g. modify conf/local.conf to change the downloads directory from TOPDIR/downloads:
DL_DIR ?= "/home/downloads/"
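Along the same lines, here is a small sketch (the /home/yocto/... paths are purely hypothetical) of moving the big, regenerable directories out of TOPDIR so that the build folder itself stays disposable:
# run from the build (TOPDIR) directory; appends overrides to this build's local.conf
cat >> conf/local.conf <<'EOF'
DL_DIR ?= "/home/yocto/downloads"
SSTATE_DIR ?= "/home/yocto/sstate-cache"
EOF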
To create a new layer, please watch this video: https://www.youtube.com/watch?v=3HsaoVqX7dg
You might have followed the Yocto Project Quick Start Guide.
The first step in Yocto after installation (cloning the git repositories and installing the required packages) is to create your OE (OpenEmbedded) build environment, which is done via:
source oe-init-build-env
This automatically creates the build folder and drops you into it.
Note that you can give any directory on your system as a parameter for this call (Reference Manual - Build Overview):
source oe-init-build-env [build_dir]
⤑ This is also the step where your 'project config' is separated from the actual build folder.
⤑ As you assumed, in practice you would at most copy the layers and not the build folder. Even better is to leave sources from others in their git repositories and only copy and maintain your own layers.
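A hedged sketch of that workflow (the repository URL and the saved-config paths are placeholders): on a new machine you clone the layers, recreate the build directory, and drop your saved configuration into its conf/ folder:
git clone git://git.yoctoproject.org/poky
cd poky
source oe-init-build-env ../my-build      # creates ../my-build/conf with default files and cd's into it
# note: bblayers.conf contains absolute layer paths, so they may need adjusting for the new machine
cp /path/to/saved/bblayers.conf /path/to/saved/local.conf conf/
bitbake core-image-minimal                # everything else under the build folder is regenerated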
This is indeed an issue in the modern Yocto build system.
Ideally, bblayers.conf would be synthesized from the MACHINE and DISTRO information and from all the provided layers (usually assembled with the help of a repo manifest file), by collecting data from each available layer's layer.conf as well as its conf/machine, conf/distro and image recipes.
Instead, bblayers.conf is usually copied over from a base layer's conf/bblayers.conf with the help of a setup-environment script.
This approach provides no "one-click" buildable environment; it requires the maintainer/developer to read the README to figure out which layers are missing and add them to build/conf/bblayers.conf.
I am using docker for continuous integration of a Scala project. Inside the container I am building the project and creating a distribution with "sbt dist".
This takes ages pulling down all the dependencies and I would like to use a docker data volume as mentioned here: http://docs.docker.io/en/latest/use/working_with_volumes/
However, I don't understand how I could get SBT to put the jar files in the volume, or how SBT would know how to read them from that volume.
SBT uses Ivy to resolve project dependencies. Ivy caches downloaded artifacts locally, and every time it is asked to resolve something it first checks that cache and only downloads from a remote repository if nothing is found. By default the cache is located in ~/.ivy2, but that is a configurable property. So just mount a volume, point Ivy at it (or mount it so that it sits at the default location), and enjoy the cache.
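For instance, a minimal sketch using a named Docker volume mounted at Ivy's default location (the sbt image name is a placeholder, and this assumes the container runs as root so the cache lands in /root/.ivy2):
docker volume create sbt-ivy-cache
docker run --rm \
  -v sbt-ivy-cache:/root/.ivy2 \
  -v "$PWD":/app -w /app \
  my-sbt-image sbt dist   # hypothetical image; later runs reuse the cached jars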
Not sure if this makes sense on an integration server, but when developing on localhost, I'm mapping my host's .ivy2/ and .sbt/ directories to volumes in the container, like so:
docker run ... -v ~/.ivy2:/root/.ivy2 -v ~/.sbt:/root/.sbt ...
(Apparently, inside the container, .ivy2/ and .sbt/ are placed in /root/, since we're logging in to the container as the root user.)
My team is developing a new DotNetNuke web application and would like to know what is recommended for setting up a development environment with source control and automated builds. We would like to keep the DNN source code separate from the source code of our custom modules and extensions.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that references your target build. For example, in my projects folder, I have \bin460 , \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA module approach
- it's not easy to shift the project to a different DNN version for development or testing
- it's easy to inadvertently source-control parts of the DNN solution, particularly with integrated VS source control solutions
This approach compiles quickly because you're not trying to compile the entire project. For test deployment I have a build script that copies the various parts of the module into a target website. This can be done via the compile (link the build script) or just run after you've had a successful compile in a cmd window. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration first before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory, and zip them into an install package. Keep all of the parts in your source control folder, and copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
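A rough sketch of that packaging step, with hypothetical file and folder names (the original batch-script-and-pkzip approach is simply translated to a generic command-line zip here):
# copy the module parts from the source folder into a temp staging directory, then zip them
PKG=/tmp/MyModule_pkg                      # hypothetical staging directory
rm -rf "$PKG" && mkdir -p "$PKG/bin"
cp MyModule.dnn *.ascx *.resx "$PKG/"      # manifest, controls, resources
cp bin/Release/MyModule.dll "$PKG/bin/"    # the module's compiled assembly
(cd "$PKG" && zip -r MyModule_Install.zip .)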
The benefits of this approach are:
- separation of module code from installed code
- simple way of keeping only the module code in source control (don't have to exclude all the website code)
- ability to quickly test out modules in different dnn versions
- packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- can't use the magic green 'go' button in VS (have to manually attach debugger)
- more setup time than developing in-place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
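A minimal sketch of that deployment step (the website path and host name are hypothetical):
# drop the install package where DNN looks for pending installs, then trigger the install
cp MyModule_Install.zip /path/to/website/Install/Module/
curl "http://my-dnn-site.local/Install/Install.aspx?mode=InstallResources"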
In response to bduke's answer: you shouldn't, and don't want to, build projects in the DesktopModules folder.
That's where all of the out-of-the-box source code for the site goes.
That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behaviour, because those changes will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build them in Release mode and install them through the Module/Extension filter.