SugarCRM: What is the best way to retrieve a list of all core and custom entry points?

I work quite a bit with custom entry points in SugarCRM and SuiteCRM. Is there a quick way to list every entry point and the file associated with it?

Entry points can be defined in many places, so the key is to find all of the definition files and then cat each one to see what it registers.
Possible Entry Point Locations
custom Extension-Framework, application-wide entry points:
$ ls -1d custom/Extension/application/Ext/EntryPointRegistry
custom Extension-Framework, module-specific entry points:
$ ls -1d custom/Extension/modules/*/Ext/EntryPointRegistry
custom non-ext-framework (legacy), application-wide entry points:
$ ls custom/include/MVC/Controller/entry_point_registry.php
custom non-ext-framework (legacy), module-specific entry points are housed in custom/modules:
$ ls custom/modules/*/entry_point_registry.php
These can be in the root-level module dir as well. I don't think any out-of-the-box modules ever did this, but you could do it with a custom module:
$ ls modules/*/entry_point_registry.php
Out-of-the-box, application-wide entry points:
include/MVC/Controller/entry_point_registry.php
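Once you've located the definition files, a quick way to dump every registration in one pass is to grep for the registry array across all of the locations above. A rough sketch, assuming a standard instance layout (missing paths are silently skipped):
$ grep -rn 'entry_point_registry\[' \
    custom/Extension/application/Ext/EntryPointRegistry \
    custom/Extension/modules/*/Ext/EntryPointRegistry \
    custom/include/MVC/Controller/entry_point_registry.php \
    custom/modules/*/entry_point_registry.php \
    modules/*/entry_point_registry.php \
    include/MVC/Controller/entry_point_registry.php 2>/dev/null
If you only care about what is currently active, the Extension framework also compiles its registrations into custom/application/Ext/EntryPointRegistry/entry_point_registry.ext.php (and the per-module equivalents under custom/modules/*/Ext/EntryPointRegistry/), so grepping the compiled files is a handy cross-check.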


Is there a way to configure pytest_plugins from a pytest.ini file?

I may have missed this detail, but I'm trying to see if I can control the set of plugins made available through the ini configuration itself.
I did not find this enumerated among the configurable command-line options, nor in any of the documentation around the pytest_plugins global.
The goal is to reuse a given test module with different fixture implementations.
@hoefling is absolutely right: there is a pytest command-line argument, -p, that can specify plugins to be used; combined with the addopts ini configuration, it can select a set of plugin files, one per -p flag.
As an example, the following ini file selects three separate plugins; plugins specified later in the list take precedence over those that came earlier.
projX.ini
[pytest]
addopts =
    -p projX.plugins.plugin_1
    -p projX.plugins.plugin_2
    -p projY.plugins.plugin_1
We can then invoke this combination on a test module with a command like
python -m pytest projX -c projX.ini
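To verify which plugin modules were actually registered, pytest's --trace-config flag prints plugin registration details at startup, so something like the following should confirm the selection:
python -m pytest projX -c projX.ini --trace-config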
A full experiment is detailed in this repository:
https://github.com/jxramos/pytest_behavior/tree/main/ini_plugin_selection

Google Cloud Storage : What is the easiest way to update timestamp of all files under all subfolders

I have datewise folders in the form of root-dir/yyyy/mm/dd,
under which there are many files present.
I want to update the timestamp of all the files falling under a certain date range,
for example 2 weeks, i.e. 14 folders, so that these files can be picked up by my file-streaming data-ingestion process.
What is the easiest way to achieve this?
Is there a way in the UI console, or is it through gsutil?
Please help.
GCS objects are immutable, so the only way to "update" the timestamp would be to copy each object on top of itself, e.g., using:
gsutil cp gs://your-bucket/object1 gs://your-bucket/object1
(and looping over all objects you want to do this to).
This is a fast (metadata-only) operation, which will create a new generation of each object, with a current timestamp.
Note that if you have versioning enabled on the bucket, doing this will create an extra version of each file you copy this way.
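For the "2 weeks, i.e. 14 folders" case from the question, a loop like the following sketch would cover the range (bucket name and start date are placeholders; the date arithmetic assumes GNU date):
for d in $(seq 0 13); do
  prefix=$(date -d "2016-01-01 +$d days" +%Y/%m/%d)
  gsutil ls "gs://your-bucket/root-dir/$prefix/**" | while read -r obj; do
    gsutil cp "$obj" "$obj"   # metadata-only self-copy bumps the timestamp
  done
done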
When you say "folders in the form of root-dir/yyyy/mm/dd", do you mean that you're copying those objects into your bucket with names like gs://my-bucket/root-dir/2016/12/25/christmas.jpg? If not, see Mike's answer; but if they are named with that pattern and you just want to rename them, you could use gsutil's mv command to rename every object with that prefix:
$ export BKT=my-bucket
$ gsutil ls gs://$BKT/**
gs://my-bucket/2015/12/31/newyears.jpg
gs://my-bucket/2016/01/15/file1.txt
gs://my-bucket/2016/01/15/some/file.txt
gs://my-bucket/2016/01/15/yet/another-file.txt
$ gsutil -m mv gs://$BKT/2016/01/15 gs://$BKT/2016/06/20
[...]
Operation completed over 3 objects/12.0 B.
# We can see that the prefixes changed from 2016/01/15 to 2016/06/20
$ gsutil ls gs://$BKT/**
gs://my-bucket/2015/12/31/newyears.jpg
gs://my-bucket/2016/06/20/file1.txt
gs://my-bucket/2016/06/20/some/file.txt
gs://my-bucket/2016/06/20/yet/another-file.txt

Is it possible to use config fragments with Buildroot's .config?

I'm using Buildroot as a submodule, and I want to reuse existing in-tree defconfigs with a few modifications of my own.
I'd like to store just the modified options in a config fragment, just like I can do with BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES for the Linux kernel config.
Right now I'm doing something like:
cd buildroot
make BR2_EXTERNAL="$(pwd)/../mypackage" qemu_x86_64_defconfig
echo '
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
' >> .config
make
Is there a nicer way to avoid that echo with a config fragment, just like I'm using for the Linux kernel config fragment? I'd expect something like:
make BR2_CONFIG_FRAG=br_config_frag
where br_config_frag contains the lines:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then I'd be able to write just:
make -C buildroot BR2_CONFIG_FRAG=br_config_frag qemu_x86_64_defconfig all
Here's the full example repo.
Edit
One slight improvement is to put the "config fragment" in a separate file buildroot_config_fragment:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then cat that:
cat ../buildroot_config_fragment >> .config
First side note: your script should run make olddefconfig before make, so that any new options are set to their default value instead of being asked for interactively.
You could simplify the script a little by doing:
cat configs/qemu_x86_64_defconfig br_config_frag > .config
make olddefconfig
You can also use the script support/kconfig/merge_config.sh from the kconfig infrastructure. However, that script internally uses make alldefconfig, which currently doesn't work; you need a patch for that.
If you would like to add support for BR2_CONFIG_FRAG to the Buildroot infrastructure, feel free to send a patch to the Buildroot mailing list!
I asked on IRC, and a user who appears to be Yann E. Morin (an active Buildroot developer) said it is not currently possible.
Arnout's make alldefconfig patch was merged into Buildroot on 26 Jul 2017
(https://github.com/buildroot/buildroot/commit/dab80981d15979eab3aea28a33694396635a52a1).
This means you can now do:
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig fragment1.config fragment2.config
This will use qemu_x86_64_defconfig as the base and add modifications given in the listed fragment config files. The tool will also show nice warnings if you override items.
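Putting it together, a full out-of-tree workflow might look like the following sketch (the fragment path is the one used earlier; since merge_config.sh writes the merged .config itself, no separate olddefconfig step is needed):
cd buildroot
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig ../buildroot_config_fragment
make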

Bitbake: How to list all recipe and append files used in an image?

I'm using OpenEmbedded-Core and have created a custom layer with priority 6. Months of development have gone by, and now I want to increase my layer's priority to 8 because an append file from another layer with priority 7 is interfering with an append file I'm adding in my layer.
My question is, how can I generate a list of recipes and .bbappend files used in an image?
I want to generate the list both before and after I make the priority change so that I can compare them (with a difftool hopefully) to see if any unexpected side-effects occurred, like an important append file from the other layer getting ignored potentially.
I'm using the angstrom-v2014.12-yocto1.7 branch of the Angstrom distribution.
[EDIT]
I'm now primarily interested in how to list which .bbappend files are actually used by my image.
A list of packages can be viewed using "bitbake -g your-image-name" as suggested by @pnxs, or from the .manifest file (which is what I like to use), which in my case is located under deploy/glibc/images/imagename/. I originally asked how a list of "recipe files" could be generated, but I think a list of packages is sufficient.
Regarding the .bbappends though, I had a case where my own .bbappend was ignored due to layer priorities. I made a change to my layer priorities and now want to see if that caused any .bbappend files anywhere else in my image to get ignored. As I understand it, using "bitbake-layers show-appends" as suggested lists all .bbappends present rather than just those which are actually used in the creation of an image, so this doesn't do what I'm looking for.
Try using "bitbake-layers show-appends" to see which bbappends are used. It only works on a per-recipe basis, but it might give you the information you need to understand the priorities.
You can do a "bitbake -g your-image-name" which creates some dot-files in the current directory.
The file "pn-depends.dot" contains a list of package-names (pn) and the dependencies between them.
When you take the first part of the file where all packages are listed, you see for example:
"busybox" [label="busybox :1.23.1-r0.2\n/home/user/yocto/sources/poky/meta/recipes-core/busybox/busybox_1.23.1.bb"]
"base-files" [label="base-files :3.0.14-r89\n/home/user/yocto/sources/poky/meta/recipes-core/base-files/base-files_3.0.14.bb"]
That gives you a list of all packages used by your image and the corresponding recipe files.
To see which of the recipes are extended by a bbappend, you have to get the list of bbappends with "bitbake-layers show-appends" and look up the appends of every recipe. You can write a small script to do that for you; a rough shell sketch follows.
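A minimal sketch of that cross-reference (unverified on angstrom-v2014.12; the grep -A 3 context is a heuristic for the indented .bbappend paths that show-appends prints under each recipe header):
bitbake -g your-image-name
bitbake-layers show-appends > appends.txt
# package names are the first quoted token on each label= line of pn-depends.dot
awk -F'"' '/label=/ {print $2}' pn-depends.dot | sort -u | while read -r pn; do
  # show-appends lists appended recipes as "<name>_<version>.bb:" headers
  grep -A 3 "^${pn}_.*\.bb:" appends.txt
done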
Try the following:
Show all recipes
bitbake-layers show-recipes
Show .bb file of a recipe
RECIPE_NAME="linux-yocto"
bitbake -e $RECIPE_NAME | grep ^FILE=
Try the following command:
bitbake -g image-name && grep -v -e '-native' pn-depends.dot | grep -v digraph | grep -v -e '-image' | awk '{print $1}' | sort | uniq

gsutil - is it possible to list only folders?

Is it possible to list only the folders in a bucket using the gsutil tool?
I can't see anything listed here.
For example, I'd like to list only the folders in a given bucket.
Folders don't actually exist. gsutil and the Storage Browser do some magic under the covers to give the impression that folders exist.
You could filter your gsutil results to only show those that end with a forward slash, but this may not show all the "folders". It will only show "folders" that were manually created (i.e., not those that exist implicitly because an object name contains slashes):
gsutil ls gs://bucket/ | grep -e "/$"
Just to add here: if you drag a folder tree directly into the Google Cloud Storage web GUI, you don't really get an object for each parent folder; instead, each file name is a fully qualified path, e.g. "blah/foo/bar.txt", rather than a folder hierarchy blah > foo > bar.txt.
The trick is to first use the GUI to create a folder called blah, then create another folder called foo inside it (using the button in the GUI), and finally drag the files into that.
When you now list the files, you will get separate entries for
blah/
foo/
bar.txt
rather than only one:
blah/foo/bar.txt
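If you also need the "folders" that exist only implicitly, one rough workaround is to derive each object's parent prefix from its name instead of relying on placeholder objects. A sketch (my-bucket is a placeholder; this lists only immediate parents, not every ancestor level):
gsutil ls gs://my-bucket/** | sed 's|/[^/]*$|/|' | sort -u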