What are the options to download .py files into the execution environment?
In this example:
class Preprocess(dsl.ContainerOp):
    def __init__(self, name, bucket, cutoff_year):
        super(Preprocess, self).__init__(
            name=name,
            # image needs to be a compile-time string
            image='gcr.io/<project>/<image-name>/cpu:v1',
            command=['python3', 'run_preprocess.py'],
            arguments=[
                '--bucket', bucket,
                '--cutoff_year', cutoff_year,
                '--kfp'
            ],
            file_outputs={'blob-path': '/blob_path.txt'}
        )
The run_preprocess.py file is invoked from the command line.
The question is: how do I get that file into the container in the first place?
I have seen this interesting example: https://github.com/benjamintanweihao/kubeflow-mnist/blob/master/pipeline.py , which clones the code before running the pipeline.
The other way would be to git clone inside the Dockerfile (although the image would take forever to build).
What are other options?
To kickstart KFP development using Python, try the following tutorial: Data passing in python components
"it clones the code before running the pipeline"
"The other way would be git cloning with Dockerfile (although the image would take forever to build)"
Ideally, the files should be inside the container image (the Dockerfile method). This ensures maximum reproducibility.
For Python scripts that are not very complex, the Lightweight python component feature allows you to create a component from a Python function. In this case the script code is stored in the component command line, so you do not need to upload the code anywhere; a minimal sketch follows after these options.
Putting scripts somewhere remote (e.g. cloud storage or a website) is possible, but can reduce reliability and reproducibility.
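As an illustration of the lightweight option, here is a minimal sketch (assuming the KFP v1 SDK; the function body, bucket layout, and image tag are hypothetical):

from kfp import components

def preprocess(bucket: str, cutoff_year: int) -> str:
    # Hypothetical preprocessing logic; the function source is serialized
    # into the component spec, so no .py file has to be shipped separately.
    return 'gs://{}/preprocessed/{}'.format(bucket, cutoff_year)

# The generated component embeds the code in its container command line.
preprocess_op = components.func_to_container_op(preprocess, base_image='python:3.7')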
P.S.
"although the image would take forever to build"
It shouldn't. The first build might be slow because the base image has to be pulled, but after that it should be fast, since only the new layers are pushed. (This requires choosing a good base image that has all dependencies installed, so that your Dockerfile only adds your scripts.)
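For illustration, a Dockerfile along these lines rebuilds quickly, since only the final layer changes when the script changes (the base image name is hypothetical):

# Hypothetical base image that already has python3 and all dependencies installed
FROM gcr.io/<project>/base-with-deps:v1
# Only this layer is rebuilt and pushed when the script changes
COPY run_preprocess.py /
CMD ["python3", "/run_preprocess.py"]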
Related
I have an .osm.pbf file from which I want to generate vector tiles (.mbtiles).
I'm currently on a Windows machine using Docker. I have tried the tool tilemaker (https://github.com/systemed/tilemaker), but I cannot get it to work on my files; I get errors like this:
"
terminate called after throwing an instance of 'std::runtime_error'
what(): Exception during zlib decompression: (-5)
"
I was just wondering whether anyone else has been able to generate these tiles from this file type. If so, could you provide a detailed, low-level guide on how you did it? I am new to vector tiles and am getting confused in places.
For anyone interested, I use this command to run Docker:
docker run tilemaker tilemaker --input=sud-latest.osm.pbf --output=sud.mbtiles
I have to put tilemaker twice; otherwise it says it cannot open the .osm.pbf file.
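For context, the first tilemaker in that command is the image name and the second is the program run inside the container. Note also that the container only sees host files that are mounted into it; a hedged sketch (assuming the .pbf sits in the current directory; use ${PWD} instead of $(pwd) in PowerShell):

docker run -v $(pwd):/data tilemaker tilemaker --input=/data/sud-latest.osm.pbf --output=/data/sud.mbtiles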
I made a tutorial on how to generate tiles using tilemaker:
https://blog.kleunen.nl/blog/tilemaker-generate-map
It is focused on Linux, but you can run it on Windows as well. You can find a pre-built Windows version of tilemaker on the CI:
https://github.com/systemed/tilemaker/pull/208/checks?check_run_id=2143761163
They will probably become available on the GitHub releases page soon as well.
Once you have the prebuilt executable and the resources (the config and process Lua files), you can simply do:
tilemaker.exe --input=sud-latest.osm.pbf --output=sud.mbtiles --process resources/process-openmaptiles.lua --config resources/config-openmaptiles.json
The output works best from zoom levels 8 to 14; borders are still missing, so lower zoom levels look pretty empty.
You can use ogr2ogr (see the other answer here) to translate the .osm.pbf into GeoJSON, and then Mapbox's tippecanoe tool to convert the GeoJSON to .mbtiles.
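A minimal sketch of that pipeline (the layer name and zoom flag are illustrative; GDAL's OSM driver exposes layers such as points, lines, and multipolygons):

ogr2ogr -f GeoJSON lines.geojson sud-latest.osm.pbf lines
tippecanoe -o sud.mbtiles -zg lines.geojson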
Possible solutions:
1. It might be a RAM issue; try running tilemaker on a smaller .osm.pbf file.
2. Run a tilemaker.exe you built yourself (by building a clone of the tilemaker GitHub repo); this may solve most issues.
When building the same source code for B&R PLCs in different paths on your PC, Automation Studio wants to restart the PLC, since the programs are laid out differently in the new build. This is also an issue when building the same source on another PC, e.g. after pulling code down from a repository.
Is there a way to configure Automation Studio, or to connect to the running PLC and get the binaries from it, so that a restart is not needed?
The build and transfer with AS has several stages. At some point binaries are created, which in turn are transformed into data objects (*.br files). The latter have a CRC and some encryption (I believe). So every task ends up as a data object (sometimes called a module).
The data objects are what is actually transferred to the PLC. With the Runtime Utility Center (RUC) you can in theory download the data objects from the PLC, but this will not help you with your issue.
If you want to avoid a warmstart for simple changes, you need to have the binaries and data objects in your project directory, notably the Temp and Binaries folders. Otherwise AS will consider your next build a rebuild, which requires a warmstart after transfer.
If you have a buildchain together with your repository you might consider storing the Binaries etc. as artifacts. I know of some companies doing exactly this.
The option I have used in the past is to use the RUC to transfer only the programs you have modified. First build your project after modifying it. Then open the RUC and select Create, modify and execute projects. Here you can basically do some scripting. In the toolbox you can find Module Functions, which allows you to download data objects to the PLC after establishing a connection. Just select the task you want to transfer in the Binaries folder of your project.
It might also be possible to modify the Transfer.lst, also located in the Binaries folder, but I haven't tried this myself.
I hope this helps.
I'm currently experimenting with ACI construction for rkt containers. During my experiments I've built some containers specifically for use as dependencies. I now want to use these .aci images as dependencies for other images. As these files are fetched by name (for example "quay.io/alpine-sh"), I wonder if there is a way to refer to actual local .aci files.
Is there a way to import these .aci files from the local filesystem, or do I have to set up a local webserver to serve as a repository?
Dependencies in acbuild (at least up to version 0.3) can be defined only as HTTP links,
so you need to make your ACI available over HTTP to use it as a dependency in acbuild.
It's not that hard to publish your ACI to make it available over HTTP. The image archive can actually be hosted on GitHub or Bitbucket.
Recent versions of acbuild seem to support it,
since the related issue (cache dependencies across acbuild invocations #144) is closed.
Cached ACIs are stored in the directories depstore-tar and depstore-expanded inside $CONTEXT_ROOT/.acbuild. If we somehow preserve the content of those directories between acbuild init invocations,
ACIs won't be downloaded over and over again.
When I played with acbuild, I was quite annoyed that acbuild re-downloads dependencies on every build.
I've written a script, https://bitbucket.org/legeyda/anyorigin/src/tip/acbuild-plus ,
which configures symbolic links inside $CONTEXT_ROOT/.acbuild to point to
persistent directories inside /var/lib/acbuild/hack. The usage is simple:
acbuild begin
acbuild-plus init target
After that all dependencies will be cached by acbuild.
You can also manually install an ACI file so that it is available to acbuild.
This is as simple as
acbuild-plus install <your-image.aci>
I've tested the script with acbuild v0.3.0.
You can get an example of its use in the Makefile next to acbuild-plus in the repository.
My application is built on Play Framework (2.5.3) using Scala. In order to deploy the app, I create a Docker image using the sbt docker:publishLocal command. I am trying to figure out where the base Docker image is defined in the Play Framework folder structure. I do see a Dockerfile in the target/docker folder. I don't know how Play Framework creates this Dockerfile, or where and how Play Framework tells Docker to layer the application on the base image. I am a Scala/Play/Docker n00b. Please help.
Here are some definitions to give an idea of what I'm talking about.
Docker containers are stored applications or services that have been configured and stored so that Docker can run them on any machine Docker is installed on.
Docker images are saved states of Docker container instances, which are used to run new containers.
Dockerfiles are files that contain build instructions describing how a Docker image should be built. They offload some of the many additional parameters one would otherwise have to attach in order to run a container just the way they want.
@michaJlS I am looking for details on the base Docker image that Play Framework uses. Regarding your question on why I need it: 1. I'd like to know where it is. 2. I'd like to know how I can add another layer to it if need be.
1.) This depends on what you are looking for. If you are looking for the ready-to-use Docker image, simply run docker images and find the image you are using. If you are looking for the raw data, you can find it in /var/lib/docker/graph. However, this is of little use without the image ID, which you can only get from the docker images command above. If the image was not built correctly, it will not appear there.
2.) If you wish to modify the image or attach an additional layer (by adding layers, I mean adding new files and modules to your application), you need to run the image (via the docker run command). Again, this is moot if the image cannot be located by the Docker daemon.
While this should answer your question, I do not believe it was what you were asking. People who are concerned with Docker are people who want to put applications into containers that can be run on any platform, avoiding dependency issues while maintaining relative convenience. If you are trying to containerize your newly created application, you are going to want to build from a Dockerfile.
In short, Dockerfiles are build instructions that you would normally have to specify either before running a container or when inside one. If the sbt docker command created a Dockerfile, you're going to need to look for that file and use the docker build command to create an instance of your image (info should be provided in that second link). Your best course of action to get your application running in a container is either to build it inside a container that has the environment of the running app, or simply to build it from a Dockerfile.
The first option would simply be docker run -ti imagename, with the image providing the environment of a machine that can run the app. You can find some images on the Docker Hub, and one that may be of interest to you is this play-framework image someone else created. That command will put you into an interactive session with the container, so you can work as if you were building the app within your own environment. Just don't forget to use docker commit when you are done building!
The faster method of building your image is to do it from a Dockerfile. If you know all the commands and have all the dependencies needed to create and run your application, simply put them into a file named "Dockerfile" (following the instructions and guidance of link 2) and run docker build -t newimagename /path/to/build/context, where the last argument is the directory containing your Dockerfile (note that image names must be lowercase). This will create an image which you can use to create containers, or distribute as you see fit.
There is no checked-in Dockerfile; it's generated by the plugin with predefined defaults that you can override:
http://www.scala-sbt.org/sbt-native-packager/formats/docker.html
You can take a look at docker.DockerPlugin to get a better idea of the defaults used.
You can also look at DockerKeys for additional tasks, like
docker-generate-config
The documentation here, http://www.scala-sbt.org/sbt-native-packager/formats/docker.html , indicates that the base image is dockerfile/java, which doesn't seem to be on Docker Hub, but details for the image are on GitHub: https://github.com/dockerfile/java
The documentation also indicates that you can specify your own base image using the dockerBaseImage setting or by creating a custom Dockerfile: http://www.scala-sbt.org/sbt-native-packager/formats/docker.html#custom-dockerfile
It also indicates what the requirements are when using your own base image:
The image to use as a base for running the application. It should include binaries on the path for chown, mkdir, have a discoverable java binary, and include the user configured by daemonUser (daemon, by default).
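For illustration, a minimal build.sbt sketch that overrides those defaults (setting names as in the sbt-native-packager docs linked above; the image name and port are only examples):

// Override sbt-native-packager's Docker defaults for the generated Dockerfile
dockerBaseImage := "openjdk:8-jre"   // must satisfy the requirements quoted above
dockerExposedPorts := Seq(9000)      // Play's default HTTP port
daemonUser in Docker := "daemon"     // the user the container runs as (the default)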
I used Yocto to build a filesystem, using a .bbappend of core-image-minimal. Two questions:
How can I figure out which package is taking up huge storage space on the rootfs?
I can't think of a way other than looking into the ${D} of every package and seeing how big its components are. There has got to be a more systematic, intelligent way to do that.
From what I can decipher from the manifest, there is nothing related to the size of the packages being included.
Also, removing some of the packages I added via the IMAGE_INSTALL variable seems to remove the package, but the size of the built image doesn't change!
I compared the size of a particular .so file on the build machine and on the installation device (a VM) and found that the size on the installation device was 20-30% of the original size seen on the build machine. Any explanation?
Thanks!
1) One way is to enable buildhistory, by adding the following to local.conf:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
This will create a directory (git repo) buildhistory in your $BUILDDIR. There you'll be able to find e.g.
images/$MACHINE/eglibc/$IMAGE/installed-package-sizes.txt
That file will give you the sizes of all installed packages.
There are a lot more things you can learn from buildhistory; see the buildhistory introduction.
2) Where did you compare the particular .so file? If it was from the package's ${B} (i.e. where the library is built), the difference is not surprising, as the installed .so file will be stripped. The debug information is installed into the -dbg package (as the debug info is usually useless on the target, and the smaller size is of much higher importance).
After some looking inside the scripts/ subdirectory and some googling about the existing scripts, it turns out that the good people of Yocto have these scripts working out of the box:
scripts/tiny/dirsize.py and ksize.py.
dirsize.py will give you a size breakdown of your rootfs, while ksize.py will give you the equivalent info for the kernel.
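For reference, a hedged usage sketch (the rootfs path is hypothetical and depends on your build configuration; dirsize.py reports sizes for the directory tree it is invoked from, so run it from the root of the extracted image rootfs):

cd tmp/work/<machine>/<image>/1.0-r0/rootfs
/path/to/poky/scripts/tiny/dirsize.py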