I am trying to specify a mount point in a kubernetes deployment descriptor and for that I need there to be a directory
volumeMounts:
- name: my-host-volume
  mountPath: /dev/bus/usb/003/005
to correspond to:
volumes:
- name: my-host-volume
  hostPath:
    path: /dev/bus/usb/003/005
How do I create this using jib?
UPDATE: newer Jib versions let you specify a copy destination directory when using <extraDirectories>, so you no longer have to manually prepare the target directory structure beforehand.
Create an empty directory <project root>/src/main/jib/dev/bus/usb/003/005 in your source repo.
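On a POSIX shell, creating that structure is a one-liner:

```shell
# Create the empty directory tree that Jib will copy into the image root
mkdir -p src/main/jib/dev/bus/usb/003/005
```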
Details
Jib allows adding arbitrary extra files and directories using the <extraDirectories> (Maven / Gradle) configuration. Files and (sub-)directories under <extraDirectories> will be recursively copied into the root directory of the image. By default, <project root>/src/main/jib is one such "extra directory", so you can simply create an empty directory with the structure you like.
You can also tune the permissions of files and directories using <permissions> if you want.
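For reference, a minimal Maven configuration sketch combining both (the mode value is illustrative, adjust to taste):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <configuration>
    <extraDirectories>
      <paths>
        <path>
          <from>src/main/jib</from>
        </path>
      </paths>
      <permissions>
        <permission>
          <file>/dev/bus/usb/003/005</file>
          <mode>755</mode>
        </permission>
      </permissions>
    </extraDirectories>
  </configuration>
</plugin>
```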
Related
Swift, Xcode macOs project
I have supported configuration of assets, basically all my assets are contained on remote repo,
I am using Jenkins as CI, in one of the stages I clone all the contents of config-repo to PROJ/Resources/Assets.xcassets
Script example:
sh 'cp -a configuration/meta_configuration/assets PROJ/Resources/Assets.xcassets || echo "copy failed"'
(configuration is cloned dir of config-repo)
So now I have to ask my config team to create folders inside the /meta_configuration/assets folder following the template asset_name.imageset with a Contents.json inside it: add one to three .png files, reference them in the .json, and upload everything into the directory.
I am looking for a way to automatically create an .imageset from given .png files (a Jenkins pipeline script, maybe), and general advice on how we should approach this step, since currently a lot of manual work is needed to add and maintain assets in the repo.
Q: How to automatically create an .imageset from given .png file(s)?
edit: added project specs
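A minimal sketch of what such a generator could look like. The Contents.json layout follows the standard Xcode asset-catalog format, but the scale-suffix convention (@2x/@3x in filenames) and all paths here are assumptions, not this project's actual layout:

```python
"""Sketch: build an .imageset folder (Contents.json + copied PNGs)
from loose PNG files, e.g. as a step in a Jenkins pipeline."""
import json
import shutil
from pathlib import Path


def make_imageset(pngs, dest_root):
    """Create <name>.imageset under dest_root for the given PNGs.

    The scale is inferred from an optional @2x/@3x filename suffix;
    files without a suffix are treated as 1x.
    """
    pngs = [Path(p) for p in pngs]
    # The base name (without any @2x/@3x suffix) names the imageset.
    base = pngs[0].stem.split("@")[0]
    imageset = Path(dest_root) / f"{base}.imageset"
    imageset.mkdir(parents=True, exist_ok=True)

    images = []
    for png in pngs:
        scale = "1x"
        if "@" in png.stem:
            scale = png.stem.split("@")[1]  # e.g. "2x", "3x"
        shutil.copy(png, imageset / png.name)
        images.append({"idiom": "universal",
                       "filename": png.name,
                       "scale": scale})

    contents = {"images": images,
                "info": {"version": 1, "author": "xcode"}}
    (imageset / "Contents.json").write_text(json.dumps(contents, indent=2))
    return imageset
```

Running this once per asset folder before the `cp -a` step would let the config team drop plain PNGs into the repo and have the .imageset structure generated automatically.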
I have a Yocto recipe to create the rootfs. I want to create another ext4 file system that includes only a few files, and merge it into the original image as a separate partition with a special mount point. wks files support this: part /myfiles --source rawcopy ... copies files from an existing image. However, I do not know how to generate this ext4 image in the same Yocto build.
The Yocto - Create and populate a separate /home partition approach almost works; however, I want to change file permissions, which does not work with that solution.
How is it possible to tackle this?
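For reference, a sketch of what the rawcopy side might look like (the image filename, mount point, and alignment are assumptions). The ext4 image itself could be produced from a staging directory with `mke2fs -t ext4 -d <staging-dir> myfiles.ext4 16M`; the `-d` flag populates the filesystem from a directory without mounting it, so permissions can be arranged in the staging directory first. The wks fragment would then reference the resulting image:

```
part /myfiles --source rawcopy --sourceparams="file=myfiles.ext4" --align 1024
```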
From what I can tell, AppVeyor can wrap an entire folder into a zip file artifact, as in the following example:
artifacts:
  - path: logs
    name: test logs
    type: zip
which will push all the files in the logs subfolder to the logs.zip ZIP file.
I want to give the generated ZIP file a different name. How can I do that?
Issue number 929 in the community support repository suggests that this is currently not supported declaratively. What you can do is rename the artifact in the after_build step as outlined in the issue:
after_build:
  - appveyor PushArtifact local-file.zip -FileName remote-file-%appveyor_build_version%.zip
It appears that the name attribute controls the name of the ZIP file as well. So if I want the ZIP file to be named foo.zip, I can write the following in the appveyor.yml:
artifacts:
  - path: logs
    name: foo
    type: zip
And that is what I did -- see here and here.
I am new to Azure DevOps. I tried many ways, but nothing helped. I am trying to create a pipeline with a Copy Files task. My folder structure looks like this:
Bin
  Common
    abc.dll
Staging
  Bin
    Common
I want to copy abc.dll from Bin\Common to Staging\Bin\Common
In my Copy Files task I set the following:
Source: Bin/Common
Contents: *.dll
Target Folder: Staging/Bin/Common
In Advanced:
Clean Target Folder: Check
Overwrite: Check
The Copy Files task succeeds, but when I go to my repo I do not see abc.dll in the Staging\Bin\Common folder. In the Copy Files task log I see
Copying D:\a\1\s\Bin\Common\abc.dll to Staging\Bin\Common\abc.dll
I believe it should instead be
Copying D:\a\1\s\Bin\Common\abc.dll to D:\a\1\s\Staging\Bin\Common\abc.dll
Thanks in advance.
SOLUTION
Thanks to 4c74356b41 for pointing me in the right direction; I accepted and marked it as the answer. As suggested, I created a variable and used it as below:
Variable Name: BinCommonStagingFolder
Variable Value: $(Build.Repository.LocalPath)\Staging\Bin\Common\
I used the variable in my Copy Files task as below, to copy only the files I need rather than all files:
Source: Bin/Common
Contents:
  abc.dll
  abc.pdb
Target Folder: $(BinCommonStagingFolder)
In Advanced:
Clean Target Folder: Check
Overwrite: Check
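For anyone using YAML pipelines, the same setup could be sketched with the CopyFiles@2 task (the variable name is the one defined above; everything else mirrors the classic-editor settings):

```yaml
variables:
  BinCommonStagingFolder: '$(Build.Repository.LocalPath)/Staging/Bin/Common'

steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: 'Bin/Common'
    Contents: |
      abc.dll
      abc.pdb
    TargetFolder: '$(BinCommonStagingFolder)'
    CleanTargetFolder: true
    OverWrite: true
```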
I guess you should add the full path; you can use a build variable for that:
Target Folder: $(Build.Repository.LocalPath)\Staging\Bin\Common\
This references the root of the repo you checked out.
For me, the issue was that I had migrated from a Windows pipeline to a Linux one, and the "\" path separators from Windows hadn't been updated to "/", causing relative paths to be interpreted incorrectly.
I have a volume declaration in a service:
volumes:
  - .:/var/www
The service's container uses an entrypoint shell script to prepare resources (npm install and gulp build). It runs fine in Jet, but the files created by the entrypoint are never detected when it runs for real.
What is different about volumes on the actual service?
The biggest difference between your local environment and the remote environment is that the build machines are created new every time.
Locally, you probably have npm modules and build files already present. Remotely, however, you won't have access to those. A way to test this with jet is to clone the repository fresh and run it directly without any initial build processes - just jet steps.
Container Files
- var
  |- www
     |- node_modules
     |  |- //installed modules
     |- build
     |  |- //build files
     |- src
        |- //source files
Build Machine Files
- root_folder
  |- src
     |- //source files
The difficulty with volumes at container runtime is that whatever is in your root directory will override what was created during the image build.
Mapping a volume remotely is, in most cases, unnecessary: you want to test the container in complete isolation.
I would recommend removing the volumes directive from the codeship-services.yml file - that should solve your issue.
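Concretely, that could look like the following sketch of a codeship-services.yml (the service and image names are illustrative):

```yaml
app:
  build:
    image: myapp          # illustrative name
    dockerfile: Dockerfile
  # volumes:
  #   - .:/var/www        # removed: this mount would shadow the files
  #                       # the entrypoint created inside the image
```

With the mount gone, the /var/www contents produced by npm install and gulp build inside the container are used as-is.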