When building LineageOS, how to use a Vendor Snapshot?

I'm trying to build LineageOS, and I need to use an older VNDK version (v30) for the vendor image: this is how the manufacturer builds it, and I need to be able to run a few of the manufacturer's vendor binaries for hardware support.
This is supported in AOSP with Vendor Snapshots. I've successfully built the v30 snapshot zip file according to the documentation using the Lineage-18.1 branch. I then move to my Lineage-20 branch and install the vendor snapshot using the command from the documentation:
python3 development/vendor_snapshot/update.py --local /path/to/snapshot.zip --install-dir vendor/VENDORNAME/vendor_snapshot 30
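(For context, the snapshot zip itself was generated beforehand on the Lineage-18.1 tree; a rough sketch of the documented flow, with the lunch target as a placeholder:)
# On the Lineage-18.1 (VNDK v30) tree; the lunch target below is a placeholder.
. build/envsetup.sh
lunch lineage_DEVICE-userdebug
m dist vendor-snapshot    # drops vendor-snapshot.zip into $DIST_DIR (out/dist by default)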
I then try to build my vendorimage with:
BOARD_VNDK_VERSION=30 mka vendorimage
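(For completeness, the same flag is normally set persistently in the device tree rather than on the command line; a minimal sketch assuming a hypothetical BoardConfig.mk path:)
# device/VENDORNAME/DEVICE/BoardConfig.mk (hypothetical path)
# Pin the vendor image to the v30 VNDK / vendor snapshot instead of the platform default.
BOARD_VNDK_VERSION := 30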
But I just get errors indicating that the build can't find dependencies in the vendor snapshot:
error: hardware/interfaces/power/aidl/default/apex/Android.bp:46:1: dependency "android.hardware.power-service.example" of "com.android.hardware.power" missing variant:
os:android,image:vendor.30,arch:arm64_armv8-2a_cortex-a75
available variants:
os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75 (alias to os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75,sdk:)
os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75,sdk:
I've confirmed android.hardware.power-service.example is in the snapshot.
I've dug into this enough to realize that it isn't an issue with just one or two dependencies; instead, it seems like the build system isn't picking up anything from the snapshot. I can see in strace that the vendor/VENDORNAME/vendor_snapshot/v30/arm64/Android.bp file is being read by the build system, but otherwise the build behaves as if the vendor snapshot wasn't installed at all.
Is there a step I have missed in installing the snapshot?
Footnote:
Here is how android.hardware.power-service.example appears in vendor/VENDORNAME/vendor_snapshot/v30/arm64/Android.bp:
vendor_snapshot_binary {
    arch: {
        arm64: {
            src: "arch-arm64-armv8-2a/binary/android.hardware.power-service.example",
        },
    },
    compile_multilib: "64",
    init_rc: [
        "configs/power-default.rc",
    ],
    name: "android.hardware.power-service.example",
    relative_install_path: "hw",
    shared_libs: [
        "libbase",
        "libbinder_ndk",
        "android.hardware.power-ndk_platform",
        "libc++",
        "libc",
        "libm",
        "libdl",
    ],
    target_arch: "arm64",
    vendor: true,
    version: "30",
    vintf_fragments: [
        "configs/power-default.xml",
    ],
}

Related

Flushing cache during deployment on production system in TYPO3 (Composer based installation)

With a Composer-based TYPO3 installation (in production), when is it necessary to flush caches, and how do you do it?
With a non-Composer installation that uses the Extension Manager, caches are flushed automatically when an extension is installed or updated.
What is the equivalent on the command line when updating / deploying?
Is it recommended to do a (hard) cache flush in some (all) cases?
Also, what is the equivalent of doing a flush cache operation in Maintenance mode ("Install Tool") from the command line (including opcache flush)?
Example deployment workflow (this may be done with some tool, such as deployer):
fetch the git repository
composer install --no-dev
... other commands to flush caches, update the DB schema, etc.
Have a look at this extension: https://packagist.org/packages/helhum/typo3-console. It allows you to execute commands on your TYPO3 installation programmatically, including one called cache:flush.
You can then utilize Composer hooks like post-autoload-dump to execute this command. So it might look something like this in your composer.json:
"scripts": {
"post-autoload-dump": [
"typo3cms install:generatepackagestates",
"typo3cms install:fixfolderstructure",
"typo3cms install:extensionsetupifpossible"
"typo3cms cache:flush"
]
}
I can't tell you if it's recommended, though, as I don't run Composer on my production server.
If you add the extensions through Composer but still install (enable) them in the TYPO3 Extension Manager or using typo3_console, the cache will still be flushed automatically. For updated extensions, or if you install the extensions directly in PackageStates.php (through git, for example), it is recommended to flush the cache and do a database compare (or extension setup).
As crs says in his answer, you can flush the cache with the typo3_console extension. You can even specify which caches you want to flush using cache:flushcache. You can also do a database compare with that extension from the command line using database:updateschema, or run extension setup using extension:setupactive (which applies the database changes and default configuration for active extensions).
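For example, a post-deployment sequence on the command line might look roughly like this (a sketch using the commands named above; paths assume a standard Composer-based layout):
# Run from the project root after `composer install --no-dev`.
vendor/bin/typo3cms database:updateschema    # database compare / apply schema changes
vendor/bin/typo3cms extension:setupactive    # set up active extensions (DB changes, default configuration)
vendor/bin/typo3cms cache:flush              # flush all TYPO3 caches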
With some delay, I would like to add my setup.
I deploy with deployer now and am using gordalina/cachetool to flush the opcache.
The following snippets are simplified; only the cache flush / post-deploy commands are shown.
This is for TYPO3 v11 (it highly depends on the TYPO3 version!).
After (or rather during) deployment, composer typo3-post-deploy is run.
composer.json (extract):
{
    "require": {
        "gordalina/cachetool": "^7.0.0",
        "helhum/typo3-console": "7.1.2"
    },
    "scripts": {
        "typo3-post-deploy": [
            "@php vendor/bin/typo3 extension:setup",
            "@php vendor/bin/typo3cms database:updateschema",
            "@php vendor/bin/typo3cms cache:flush",
            "@php vendor/bin/cachetool opcache:reset"
        ],
        "deploy:post": [
            "@typo3-post-deploy"
        ],
        "typo3-cms-scripts": [
            "@php vendor/bin/typo3cms install:fixfolderstructure"
        ],
        "post-autoload-dump": [
            "@typo3-cms-scripts"
        ],
        "cache:flush": [
            "@php vendor/bin/typo3cms cache:flush",
            "@php vendor/bin/cachetool opcache:reset"
        ]
    }
}
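To run the post-deploy block by hand (for example from a deployer task), the script can be invoked through Composer:
# Run inside the current release directory on the target host.
composer run-script typo3-post-deploy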

Ansible error using `win_copy`: "Unexpected failure during module execution"

I upgraded to Python 3.5 and my Ansible deployment started failing. I'm not sure if the two are related, but here is the info:
Ansible version: 2.3.2
yaml file:
- name: Collect compiled DLLs for publishing
  win_copy:
    src: '{{ download_dir }}/tmp/xxxx/bin/Release/PublishOutput/bin/'
    dest: '{{ work_dir }}\bin'
Error:
{
    "failed": true,
    "msg": "Unexpected failure during module execution.",
    "stdout": ""
}
Upgrade to Ansible 2.5.1 to fix this issue.
Summary from the pull request:
When win_copy copies multiple files it can sometimes delete the local tmp folder that should be used by multiple modules. This means any further modules that need to access this local tmp folder will fail.
We never came across this in ansible-test as we ran a Python module on localhost which causes the ansiballz cache to stop win_copy from successfully deleting this folder.
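If Ansible was installed via pip on the control machine (an assumption; adjust for distribution packages), the upgrade is roughly:
# Upgrade to a release that contains the win_copy fix.
pip install --upgrade 'ansible>=2.5.1'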

cf push to IBM Cloud failed: Unable to install node: improper constraint: >=4.1.0 <5.5.0

I pushed an app to the IBM Cloud after a minor change (just some data, no code or dependency changes), and staging failed with this output:
cat: /VERSION: No such file or directory
-----> IBM SDK for Node.js Buildpack v4.0.1-20190930-1425
Based on Cloud Foundry Node.js Buildpack 1.6.53
-----> Installing binaries
engines.node (package.json): >=4.1.0 <5.5.0
engines.npm (package.json): unspecified (use default)
**WARNING** Dangerous semver range (>) in engines.node. See: http://docs.cloudfoundry.org/buildpacks/node/node-tips.html
**ERROR** Unable to install node: improper constraint: >=4.1.0 <5.5.0
Failed to compile droplet: Failed to run all supply scripts: exit status 14
Exit status 223
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a stopping instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a destroying container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a successfully destroyed container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
FAILED
Error restarting application: BuildpackCompileFailed
An older version of the app is already running on the IBM Cloud (from May 2019, I think), so I wonder what changed to make it stop working.
In IBM Cloud Foundry, the Node.js version must be specified like this:
"engines": {
    "node": "12.x"
}
or
"engines": {
    "node": "12.10.x"
}
You can also try removing this block from your package.json file completely:
"engines": {
    "node": "6.15.1",
    "npm": "3.10.10"
},
Here's a quick reference.
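For context, a minimal package.json showing where the engines block sits (the name and version fields are placeholders):
{
    "name": "my-app",
    "version": "1.0.0",
    "engines": {
        "node": "12.x"
    }
}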

install mongo with chef

I've been trying to figure out how to install a MongoDB 3.4 instance using this Chef cookbook, but I've not been able to get it installed.
This is my mongodb.rb file content:
node.default['mongodb']['package_version'] = '3.4'
include_recipe 'mongodb::default'
And my metadata.rb contains: depends 'mongodb', '~> 0.16.2'.
I've tried to verify it on the centos-72 platform using kitchen verify centos-72, and I'm getting this message:
ERROR: yum_package[mongodb-org] (mongodb::install line 77) had an error: Chef::Exceptions::Package: Version ["3.4"] of ["mongodb-org"] not found. Did you specify both version and release? (version-release, e.g. 1.84-10.fc6)
I've realized this cookbook tries to add this yum_repository:
yum_repository 'mongodb' do
  description 'mongodb RPM Repository'
  baseurl "http://downloads-distro.mongodb.org/repo/redhat/os/#{node['kernel']['machine'] =~ /x86_64/ ? 'x86_64' : 'i686'}"
  action :create
  gpgcheck false
  enabled true
end
And according to this MongoDB documentation, the repository URL should be:
https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
instead of
"http://downloads-distro.mongodb.org/repo/redhat/os/..."
The repo you are using does not have version 3.4 available. You can verify this manually by just looking at the RPMs in http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/
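A possible workaround (a sketch, not tested against that cookbook version) is to declare the 3.4 repository yourself with the URL from the MongoDB documentation, in a recipe that runs before mongodb::default:
# Hypothetical recipe snippet: point yum at the official MongoDB 3.4 repository.
yum_repository 'mongodb-org-3.4' do
  description 'MongoDB 3.4 RPM Repository'
  baseurl 'https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/'
  gpgcheck false
  enabled true
  action :create
end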

Logstash-Forwarder 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-Forwarder 3.1.1 on CentOS 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder 'logrotate' file. The /.logstash-forwarder file is recreated each time 100 events are recorded, but it is not updated with data. I know the file has been recreated because I changed its permissions as a test, and the permissions are reset each time.
Below are my configurations (with some actual data scrubbed):
Logstash-forwarder 3.1.1
Centos 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
    "network": {
        "servers": [ "*server*:*port*" ],
        "timeout": 15,
        "ssl ca": "/*path*/logstash-forwarder.crt"
    },
    "files": [
        {
            "paths": [
                "/a/b/tomcat-*-*/logs/catalina.out"
            ],
            "fields": { "type": "apache", "time_zone": "EST" }
        }
    ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder logrotate file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured here. The offset has stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
Can anyone give me any advice on how to fix this problem whether it be a configuration setting I missed or a bug?
Thank you!
After more research I found it was announced that Filebeat is now the preferred forwarder. I even found a post by the owner of Logstash-Forwarder saying that the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much more smoothly now!
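For anyone making the same switch, a minimal Filebeat configuration covering the same logs might look roughly like this (a sketch; exact keys depend on the Filebeat version, and the host, port, and certificate path are placeholders):
# /etc/filebeat/filebeat.yml (sketch for a recent Filebeat release)
filebeat.inputs:
  - type: log
    paths:
      - /a/b/tomcat-*-*/logs/catalina.out
    fields:
      type: apache
      time_zone: EST
output.logstash:
  hosts: ["server:port"]                                        # placeholder host/port
  ssl.certificate_authorities: ["/path/logstash-forwarder.crt"] # placeholder path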