I have an Azure DevOps classic build pipeline and I need the ability to set the build Clean option at queue time. It seems this used to be possible via the Build.Clean variable, but that variable has since been deprecated.
When editing the build pipeline, the Clean option uses an editable drop-down, but any time you try to type something, it erases what you just wrote. I would like to set this option to a variable such as $(CleanBuild).
Assign build clean option at queue time
Indeed, the Build.Clean variable is already deprecated. However, the Use predefined variables documentation provides another variable, Build.Repository.Clean, which will help us clean the Sources:
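For example, to make that variable controllable at queue time (a sketch of the classic-editor setup; the values shown are illustrative, while "Settable at queue time" is the standard classic-pipeline checkbox):

    Variables tab of the build definition:
      Name:                    Build.Repository.Clean
      Value:                   false
      Settable at queue time:  checked

    When queueing a build, override the value:
      Build.Repository.Clean = true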
As for the other Clean option fields, such as All build directories: I do not believe there is a way to assign those at queue time. Even using the deprecated Build.Clean variable, we can still clean Sources only.
You could check this similar thread for some more details.
Hope this helps.
I have an issue related to vardeps where I am making a task dependent on some variables.
I have created some new variables, e.g. NEW_VARIABLE, and added them to BB_ENV_EXTRAWHITE. In some recipes I wrote my own implementation of some tasks that depend on these new variables, and for this dependency to work I added, e.g., do_install[vardeps] = "NEW_VARIABLE". I am now expecting that every time I change NEW_VARIABLE and run, e.g., bitbake recipename, the do_install task should run. I checked the task signature, and I see NEW_VARIABLE there.
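For reference, a minimal sketch of the setup described (the recipe name and the file written by do_install are hypothetical):

    # Environment, before invoking bitbake:
    export NEW_VARIABLE="value1"
    export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE NEW_VARIABLE"

    # recipename.bb (hypothetical recipe):
    do_install() {
        install -d ${D}${sysconfdir}
        # Bake the variable's current value into the package
        # (hypothetical file name)
        echo "${NEW_VARIABLE}" > ${D}${sysconfdir}/new-variable.conf
    }
    do_install[vardeps] = "NEW_VARIABLE"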
Let's assume I have two possible values for this variable. When I set the variable for the first time, to "value1", i.e. on the first build, everything works and there is no problem. When I change its value to the other, previously unused value "value2" and build the recipe again, do_install also runs and no problem occurs. The problem is, however, that if I set the variable back to the old value "value1" and execute bitbake recipename again, do_install is not re-triggered, which leads to wrong/old data in the work directory and in the produced image.
I tried setting BB_DONT_CACHE, as I understood from an old question that the problem might be that the recipe needs to be parsed again, but this did not work at all.
I do not want to run the tasks on every build, i.e. do_install[nostamp] = "1", so that solution cannot be considered. I just want do_install to run again every time I change NEW_VARIABLE.
Is what I am expecting normal behavior, or does Yocto simply not work this way?
I faced the same issue all day and also tried vardeps, BB_DONT_CACHE, etc., then switched to using SRC_URI, but I still got the same behavior. Like you, I build a separate recipe ("bitbake name") and look in the recipe's work directory (image/) for the output. For me it turned out that I simply assumed this would be updated just as when I make other, new changes. But once I change the input back to something old, the shared state cache kicks in and the work directory is left alone, fooling me into thinking it does not work properly. When I build the final base image, however, it is populated correctly. I guess one needs to force the build or bypass the cache to get the output this way.
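In other words, to see fresh output in the recipe's work directory even when an old value (and thus an old, already-cached task signature) is reused, the task has to be forced or the recipe's shared state removed, e.g.:

    # Re-run do_install in place, ignoring existing stamps:
    bitbake -f -c install recipename

    # Or drop the recipe's shared state entirely and rebuild:
    bitbake -c cleansstate recipename && bitbake recipename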
We use the option on build definitions to automatically create a bug upon build failure, which is awesome... until we set up separate teams within the project for different focuses (e.g. BI / Client / Server). As they are separate teams, they have their own iterations.
Whenever a bug is automatically created, it is assigned to the build initiator (yay!) and placed in a current iteration, but it is an iteration for a team the build initiator is not in (boo!).
I'm assuming that I can force the iteration by using the "Additional fields" option within the build definition, but it's unclear whether the iteration path can be set there (and if it can, how). From what I can see, the documentation on work item fields implies the iteration path is read-only?
I'm assuming that I can force the iteration by using the "Additional fields" option within the build definition, but it's unclear whether the iteration path can be set there (and if it can, how)?
You can use the additional field System.IterationPath to specify the iteration path for the bugs raised on a failed build:
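For example, in the build definition's Options, where the "Create work item on failure" setting lives, add an additional field like this (the iteration value here is just an illustration):

    Field:  System.IterationPath
    Value:  MyTestProject\Iteration 2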
After the build fails (my current iteration is MyTestProject\Iteration 1), the bug is created under the iteration path given in the additional field instead.
Hope this helps.
Is it possible to get the values of the custom variables used in a build? I know they can be dumped to the console output, as this example describes, but I would still like to find an easier way to achieve it.
http://www.codewrecks.com/blog/index.php/2017/08/04/dump-all-environment-variables-during-a-tfs-vsts-build/
There isn't an easier way than the one you provided to retrieve the build variables. The value of a variable can be changed during the build (via a logging command), so it's better to retrieve the variables at the end of the build, as the approach you provided does.
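For completeness, the linked approach boils down to a single inline PowerShell step placed at the end of the build definition; a minimal sketch:

    # Inline PowerShell build step: dump all environment variables.
    # Build variables are exposed to scripts as environment variables
    # (except secrets; see the note below).
    Get-ChildItem Env: | Sort-Object Name | Format-Table Name, Value -AutoSize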
Note that secret variables can't be output as plain text.
I was wondering whether it's currently possible to have an 'external' (.so/.dylib) LLVM plugin (module) pass scheduled at LTO time. The reason for wanting this is an inter-modular optimization I want to add.
I also found this topic: How to write a custom intermodular pass in LLVM?
But a separate tool is not an option for me.
Thanks
I think the most helpful thing here might be to understand how passes are run and what the state of the code is during LTO.
First of all, when optimization passes are run by the compiler, they are run as a set that has been added to a PassManager. This means that when LLVM/Clang is passed something like -O3, it creates a PassManager and populates it with the set of passes expected to provide the O3 level of optimization. This is very different from an external library, which must be loaded manually and cannot be slotted into the pass pipeline in the normal way.
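As an illustration of that mental model, this is roughly how a canned pipeline is assembled with LLVM's new pass manager (a sketch, not Clang's actual driver code):

    #include "llvm/Passes/PassBuilder.h"
    using namespace llvm;

    ModulePassManager buildO3Pipeline(PassBuilder &PB) {
      // Ask the PassBuilder for the pipeline corresponding to -O3.
      // (To actually run it, the analysis managers must be registered
      // with this PassBuilder first.)
      return PB.buildPerModuleDefaultPipeline(OptimizationLevel::O3);
    }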
Then we have the state of things when doing LTO. During Link Time Optimization, all of the individual translation units are consolidated into a single Module. This means that an optimization which runs on each function will run on every function in the code base. Similarly, a per-module optimization will run on the full Module and can therefore perform inter-procedural analysis/optimization.
If you're looking to use an intra-modular pass, there is no reason to do this at LTO time; instead, you can simply make a ModulePass and run it on each translation unit.
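That said, newer LLVM releases do offer a way to hook an external plugin into the LTO pipeline itself. Below is a minimal sketch using the new pass manager's plugin API (the pass name and its body are hypothetical, and registerFullLinkTimeOptimizationLastEPCallback requires a reasonably recent LLVM). It registers a module pass at the full-LTO extension point, so it runs once over the merged Module:

    // MyLTOPass.cpp - build as a shared library against the LLVM headers.
    #include "llvm/IR/Module.h"
    #include "llvm/IR/PassManager.h"
    #include "llvm/Passes/PassBuilder.h"
    #include "llvm/Passes/PassPlugin.h"

    using namespace llvm;

    namespace {
    // Hypothetical inter-modular pass: at full LTO it sees the single
    // merged Module, i.e. every function in the program.
    struct MyInterModularPass : PassInfoMixin<MyInterModularPass> {
      PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
        for (Function &F : M) {
          (void)F; // whole-program analysis/transformation goes here
        }
        return PreservedAnalyses::all();
      }
    };
    } // namespace

    extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo
    llvmGetPassPluginInfo() {
      return {LLVM_PLUGIN_API_VERSION, "MyInterModularPass", "0.1",
              [](PassBuilder &PB) {
                // Append the pass at the end of the full-LTO pipeline.
                PB.registerFullLinkTimeOptimizationLastEPCallback(
                    [](ModulePassManager &MPM, OptimizationLevel) {
                      MPM.addPass(MyInterModularPass());
                    });
              }};
    }

Since the LTO pipeline runs inside the linker, the plugin must also be loaded there; recent lld versions accept a --load-pass-plugin flag for this (check your toolchain), e.g. clang -flto -fuse-ld=lld -Wl,--load-pass-plugin=./MyLTOPass.so main.o util.o.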
This seems like a simple thing, but I can't find an answer in the existing questions:
How do you add a global argument to all your present and future run or debug configurations? In my case I need a VM argument, but I can see this being useful for command-line arguments as well.
Basically, every time I create a unit test I need to create a configuration (or run it, which creates one), and then manually edit each one with the same VM argument. This seems silly for such a good tool.
This is not true; you can add the VM arguments to the JRE definition. That is exactly what it is for. I use it myself so that assertions are enabled and the heap is 1024 MB on every run, even future ones.
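Concretely (the two arguments are the ones mentioned above: -ea enables assertions, -Xmx1024m sets a 1024 MB heap):

    Window -> Preferences -> Java -> Installed JREs
      -> select your JRE -> Edit...
      -> Default VM arguments:  -ea -Xmx1024m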
Ouch: there is a seven-year-old bug asking for run configuration templates, precisely for this kind of reason.
This thread proposes an interesting workaround, based on string substitution and on duplicating a template configuration:
You can define variables in Window->Preferences->Run/Debug->String Substitution. For example, you can define a projectName_log4j variable with the correct -Dlog4j.configuration=... value. In a run configuration you can then use ${projectName_log4j}, and you don't have to remember the real value.
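For example (the log4j file path is hypothetical):

    Window -> Preferences -> Run/Debug -> String Substitution:
      Name:   projectName_log4j
      Value:  -Dlog4j.configuration=file:/path/to/log4j.properties  (hypothetical path)

    Run configuration -> Arguments -> VM arguments:
      ${projectName_log4j}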
You can also define a project-specific "empty" run configuration. Set the project and the argument fields in this configuration, but not the main class. When you have to create a new run configuration for this project, select this one and use 'Duplicate' from its pop-up menu to copy it. Then you simply have to set the main class and the program arguments.
You can also combine both solutions: use a variable, and define an "empty" run configuration which uses this variable. The great advantage in this case is that when you begin to use a different log4j config file, you only have to change the variable declaration.
Not ideal, but it may make your process less painful.