I have an image recipe that inherits the core image (inherit core-image) and adds some additional packages of its own.
I'm updating to a new version of poky. Is there a way to see what packages (relevant to my image) have been affected by the update?
Yes and No.
Yes, in the meaning of: the information is certainly available.
No, as this information is not directly readable for a variety of reasons:
The term "relevant" to my image is highly unspecific. Does it mean that a package needs to be recompiled? Does it mean that a package comes in a new version? Does it mean that something in your image dependendency tree changed? Or does it mean that something in your build-time dependencies changed?
For the recompilation, it will almost certainly apply to every single package. Reason: poky releases usually bring a new gcc version, and this triggers recompilation for basically everything.
Now for the good news:
[MACHINE refers to your specific machine type, IMAGE to the image recipe in question]
Yes for packages in your image: in your
tmp/deploy/images/MACHINE/IMAGE.manifest
is a concise list of the packages in your image and their respective versions. Just diff that file from before and after the update, and there you go!
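For example, a minimal sketch, assuming you keep a copy of the manifest from before the upgrade (the backup name is arbitrary):
# before updating poky
cp tmp/deploy/images/MACHINE/IMAGE.manifest IMAGE.manifest.before
# ... update poky, rebuild IMAGE ...
# after the rebuild: shows added/removed packages and version bumps
diff -u IMAGE.manifest.before tmp/deploy/images/MACHINE/IMAGE.manifest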
Yes for dependencies: do a
bitbake -g IMAGE
before and after the update, keeping the resulting
recipes-depends.dot, task-depends.dot
files for comparison. Now diffing those will give you precise (albeit probably not well-formatted) information about what has changed, in which way, and how it will affect your image build.
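Something along these lines, assuming you stash the pre-update graphs under names of your own choosing:
# before updating poky
bitbake -g IMAGE
cp recipes-depends.dot recipes-depends.before.dot
cp task-depends.dot task-depends.before.dot
# ... update poky ...
bitbake -g IMAGE
# sorting first keeps the diff from being dominated by reordered lines
# (the <( ) process substitution needs bash)
diff <(sort recipes-depends.before.dot) <(sort recipes-depends.dot)
diff <(sort task-depends.before.dot) <(sort task-depends.dot)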
I am using Jib to pull a base image, add my wrapper Java code to it, and build my image on top of that. Due to the widely known log4j CVE in December 2021, we are looking for a way to remove the vulnerable classes. (More CVEs have now been found in 2022; one of them has a score of 10.0, the highest possible. See https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-37215/Apache-Log4j.html)
The base image is near EOL, so the provider answered that they would not release a new version; besides, log4j 1.x also reached EOL long ago. The current situation is that we have no plan to upgrade the base image to the next version, so removing the classes seems to be the only way now.
The base image uses /opt/amq/bin/launch.sh as its entrypoint, and I have found that I can use a customized entrypoint to run a script beforehand that removes the classes: something like <entrypoint>/opt/amq/bin/my_script.sh</entrypoint>, where my_script.sh runs run_fix.sh && /opt/amq/bin/launch.sh.
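For reference, a minimal sketch of what such a wrapper could look like (the exact location of run_fix.sh is an assumption; it is the script that deletes the vulnerable classes):
#!/bin/sh
# strip the vulnerable log4j classes, then hand over to the original entrypoint
/opt/amq/bin/run_fix.sh && exec /opt/amq/bin/launch.sh "$@"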
Then I realized that even though this would mitigate the risk when the application is actually running, the vulnerability scan (part of our security process) will still raise alarms while examining the image binary, because that is a static check done before the image is uploaded to the Docker registry for production, long before the container actually runs. The classes are only removed at the moment the application runs, i.e. at runtime.
Can Jib pre-process the base image during the Maven build (mvn clean install -Pdocker-build) instead of only allowing this at runtime? From what I have read, I understand the answer is a big no, and there is no plugin for it yet.
By the design of container images, it is impossible for anyone or any tool to physically remove files from an already existing container image: images are immutable. The best you can do is mark files as deleted with a special "whiteout" file (.wh.xyz) in a new layer, which makes the container runtime hide the target files at runtime.
However, I am not sure if your vulnerability scanner will take the effect of whiteout files into account during scanning. Hopefully it does. If it doesn't, the only option I can think of is to re-create your own base image.
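If it comes to that, a rough sketch of re-creating the base image with the classes physically gone (the image names and the jar path here are placeholders, not taken from your setup):
# 1. make a throwaway image where the vulnerable jars are deleted (this only adds a whiteout layer)
docker build -t amq-tmp - <<'EOF'
FROM vendor/amq-base:latest
RUN rm -f /opt/amq/lib/optional/log4j-*.jar
EOF
# 2. flatten it by exporting a container and re-importing it, so the deleted
#    files are physically absent from the resulting single-layer image
docker create --name amq-tmp-c amq-tmp
docker export amq-tmp-c | docker import --change 'ENTRYPOINT ["/opt/amq/bin/launch.sh"]' - my-amq-base:patched
docker rm amq-tmp-c
# note: export/import drops most image metadata (ENV, USER, EXPOSE, ...), so anything
# you still need has to be re-added via --change or in your Jib configuration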
Take a look at this Stack Overflow answer for more details.
I am trying to download MRTK by following https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/GettingStartedWithTheMRTK.html. But in the Assets section on GitHub for MRTK, I cannot find the two packages mentioned below:
Microsoft.MixedRealityToolkit.Unity.Examples.unitypackage
Microsoft.MixedRealityToolkit.Unity.Foundation.unitypackage
Did I miss anything simple?
The exact naming of the packages is going to be different from release to release because the version number increases each time a new version is published.
The docs will say something like
"Microsoft.MixedRealityToolkit.Unity.Examples.unitypackage"
But the latest one right now is actually named:
"Microsoft.MixedReality.Toolkit.Unity.Examples-v2.0.0-RC2.1.unitypackage"
It's done this way to reduce the number of things that have to change with each release: making sure that every single mention of a version number stays up to date is somewhat error prone, so when the docs can just say "grab the latest package" or "grab the examples package", it also helps reduce the number of version mismatches out there.
So to be super clear here, basically go to the releases page:
https://github.com/microsoft/MixedRealityToolkit-Unity/releases
Scroll to the bottom of the latest release, expand the Assets section, and get the two unitypackages:
https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Examples-v2.0.0-RC2.1.unitypackage
https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Foundation-v2.0.0-RC2.1.unitypackage
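If you prefer to fetch them from the command line, something like the following works; these are just the v2.0.0-RC2.1 URLs from above, and the file names will change with every release:
curl -LO https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Foundation-v2.0.0-RC2.1.unitypackage
curl -LO https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Examples-v2.0.0-RC2.1.unitypackage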
Would it be helpful to document the packages as
Microsoft.MixedRealityToolkit.Unity.Foundation[Version].unitypackage?
I have a large repository that builds many files such as DLLs and EXEs. These files are built with resource information embedded in them, such as the file version and company name.
If I wish to make an upgrade package that includes only files that have been updated, I have to know whether any changes have been made to the built binaries/executables. This causes two problems:
Version control systems like Perforce handle source code well, but source code in one project does not necessarily stay there; it may end up in another project's DLL.
Because we embed version information into the binaries at compile time, the files "look" different if they have different file version numbers, even when the underlying source code is identical.
So basically: how do you manage file versions of binaries/executables in an automated process in order to determine which of the built files differ from the previous version?
One option could be to run the build process twice: first with the old version number, for comparison, and then with the new version number, for actual use. Does anyone have a better suggestion or applications to recommend?
If possible, my main suggestion would be to improve the version string generation. If you can't determine from the version string what source the binary was built from, what's the point of even having a version string? Ideally if the source doesn't change, the version string doesn't change either.
If it's not possible for your actual binaries to have useful version strings, my suggestion would be to embed that information (i.e. what source files were used and what the latest change to those source files was) in the change description when you submit the binaries -- that way you can at least look it up after the fact, and can use that information to compare binaries.
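A hedged sketch of the first suggestion, assuming a Perforce workspace and a build that can accept the version as a parameter (the depot path and the FileVersionRevision property are made up for illustration):
# newest changelist that actually touched this project's sources
CHANGE=$(p4 changes -m1 //depot/myproject/...#have | cut -d' ' -f2)
# feed it into the build so the embedded file version mirrors the source state;
# identical sources then produce an identical version resource
msbuild MyProject.sln /p:FileVersionRevision=$CHANGE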
In Pharo 2.0 I started with classes in one Package/Category (I'm not sure what the right term is in Pharo at the moment). I have an identically named Monticello package which I contribute to.
Now i split the Package/Category:
MyPackage
becomes:
MyPackage-Core
MyPackage-AddOns
What is the intended way to manage these Packages/Categories with Monticello now? Is there a way to automatically split the Monticello packages accordingly? (I created some mess doing it manually and ended up starting in a new image and manually filing in the classes and then creating new Monticello packages)
I found this on the Pharo developers mailing list (on splitting MC packages):
Closing the eyes and restarting from scratch. We did that too with the
Seaside packages at some point. [...]
We did the same for Moose. We kind of followed this process:
create new sub-package XYZ-Sub*
move classes from XYZ to XYZ-Sub*
repeat until all classes and extensions from XYZ are moved away to subpackages
add the Monticello repository to XYZ-Sub*
commit all XYZ-Sub* packages. Make sure that there are no categories without packages left behind (in other words, to not lose code)
save the image
load in a fresh image
if problems appear, and they always appear because it is manual work, go to the previous image and recommit
You might want to take a look at the Monticello manual, or perhaps it is easier to read the section on Monticello in Pharo by Example. Deeper info is in the draft chapters in volume 2.
Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it really does make more sense to version only the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, which avoids the need to version these files.
One scenario I can imagine where this could be useful is 'tagging' specific releases of a product for use in a production environment (or any non-development environment) where the tools required to generate the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for downloading by users of your products.
I am using TortoiseSVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst it doesn't make a lot of sense to have these under source control in theory, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also, in case of disaster, rolling back to the previous revision is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that created a tarball that includes the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
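A hedged sketch of such a release step, following the pstool example from the question (the exact command that regenerates the derived files, and the file list, are assumptions):
# regenerate the derived files from the master source
latex pstool.tex
# bundle the master file plus the derived files into a versioned release archive
tar czf pstool-v1.0.tar.gz pstool.tex pstool.sty pstool.pdf README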
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more of a linear audit trail than branching-and-tagging source control.