Flushing cache during deployment on a production system in TYPO3 (Composer-based installation)

With a Composer-based TYPO3 installation (in production), when is it necessary to flush caches, and how to do it?
With a non-Composer installation using the "Extension Manager", caches are flushed automatically when an extension is installed or updated.
What is the equivalent on the command line when updating / deploying?
Is it recommended to do a (hard) cache flush in some (all) cases?
Also, what is the equivalent of doing a flush cache operation in Maintenance mode ("Install Tool") from the command line (including opcache flush)?
Example deployment workflow (this may be done with some tool, such as deployer)
fetch git repository
composer install --no-dev
... other commands to flush caches, update the DB schema, etc.
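The steps above can be sketched as a small deploy script. This is a minimal sketch, assuming helhum/typo3-console and gordalina/cachetool are installed via Composer; the exact command names vary between TYPO3 versions:

```shell
#!/bin/sh
set -e  # abort on the first failing step

# 1. Fetch the code and install production dependencies
git pull
composer install --no-dev --no-progress

# 2. Update the DB schema and flush TYPO3 caches (helhum/typo3-console)
php vendor/bin/typo3cms database:updateschema
php vendor/bin/typo3cms cache:flush

# 3. Reset PHP's opcache (gordalina/cachetool), the CLI counterpart of the
#    opcache flush done by the Install Tool's "Flush cache" in Maintenance mode
php vendor/bin/cachetool opcache:reset
```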

Have a look at this package: https://packagist.org/packages/helhum/typo3-console. It allows you to run commands on your TYPO3 installation programmatically, including one called cache:flush.
You can then utilize Composer hooks like post-autoload-dump to execute this command. It might look something like this in your composer.json:
"scripts": {
    "post-autoload-dump": [
        "typo3cms install:generatepackagestates",
        "typo3cms install:fixfolderstructure",
        "typo3cms install:extensionsetupifpossible",
        "typo3cms cache:flush"
    ]
}
I can't tell you if it's recommended, though, as I don't run Composer on my production server.

If you add the extensions through Composer but still install (enable) them in the TYPO3 Extension Manager or using typo3_console, the cache will still be flushed automatically. For updated extensions, or if you activate the extensions directly in PackageStates.php (through git, for example), it is recommended to flush the cache and do a database compare (or extension setup).
As crs says in his answer, you can flush the cache with the typo3_console extension. You can even specify which caches you want to flush using cache:flushcache. You can also do a database compare with that extension from the command line using database:updateschema, or run an extension setup using extension:setupactive (which applies the database changes and default configuration for active extensions).

With some delay, I would like to add my setup.
I deploy with Deployer now and use gordalina/cachetool to flush the opcache.
The following snippets are simplified, only the cache flush / post deploy commands are added.
For TYPO3 v11 (the exact commands depend heavily on the TYPO3 version!).
After (or rather during) deployment, composer typo3-post-deploy is run.
composer.json (extract):
{
    "require": {
        "gordalina/cachetool": "^7.0.0",
        "helhum/typo3-console": "7.1.2"
    },
    "scripts": {
        "typo3-post-deploy": [
            "@php vendor/bin/typo3 extension:setup",
            "@php vendor/bin/typo3cms database:updateschema",
            "@php vendor/bin/typo3cms cache:flush",
            "@php vendor/bin/cachetool opcache:reset"
        ],
        "deploy:post": [
            "@typo3-post-deploy"
        ],
        "typo3-cms-scripts": [
            "@php vendor/bin/typo3cms install:fixfolderstructure"
        ],
        "post-autoload-dump": [
            "@typo3-cms-scripts"
        ],
        "cache:flush": [
            "@php vendor/bin/typo3cms cache:flush",
            "@php vendor/bin/cachetool opcache:reset"
        ]
    }
}


When building LineageOS, how to use a Vendor Snapshot?

I'm trying to build LineageOS and I need to use an older VNDK version (v30) for the vendor image because this is how the manufacturer does it and I need to be able to run a few of the manufacturer's vendor binaries for hardware support.
This is supported in AOSP with Vendor Snapshots. I've successfully built the v30 snapshot zip file according to the documentation using the Lineage-18.1 branch. I then moved to my Lineage-20 branch and installed the vendor snapshot using the command in the documentation:
python3 development/vendor_snapshot/update.py --local /path/to/snapshot.zip --install-dir vendor/VENDORNAME/vendor_snapshot 30
I then try to build my vendorimage with:
BOARD_VNDK_VERSION=30 mka vendorimage
But I just get errors that indicate it can't find dependencies in the vendor snapshot.
error: hardware/interfaces/power/aidl/default/apex/Android.bp:46:1: dependency "android.hardware.power-service.example" of "com.android.hardware.power" missing variant:
os:android,image:vendor.30,arch:arm64_armv8-2a_cortex-a75
available variants:
os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75 (alias to os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75,sdk:)
os:android,image:vendor.33,arch:arm64_armv8-2a_cortex-a75,sdk:
I've confirmed android.hardware.power-service.example is in the snapshot.
I've done enough digging into this to realize that it's not an issue with just one or two dependencies; instead it seems like the build system isn't picking up anything from the snapshot. I can see in strace that the vendor/VENDORNAME/vendor_snapshot/v30/arm64/Android.bp file is being read by the build system, but otherwise the build behaves as if the vendor snapshot wasn't installed at all.
Is there a step I have missed in installing the snapshot?
Footnote:
Here is how android.hardware.power-service.example appears in vendor/VENDORNAME/vendor_snapshot/v30/arm64/Android.bp:
vendor_snapshot_binary {
    arch: {
        arm64: {
            src: "arch-arm64-armv8-2a/binary/android.hardware.power-service.example",
        },
    },
    compile_multilib: "64",
    init_rc: [
        "configs/power-default.rc",
    ],
    name: "android.hardware.power-service.example",
    relative_install_path: "hw",
    shared_libs: [
        "libbase",
        "libbinder_ndk",
        "android.hardware.power-ndk_platform",
        "libc++",
        "libc",
        "libm",
        "libdl",
    ],
    target_arch: "arm64",
    vendor: true,
    version: "30",
    vintf_fragments: [
        "configs/power-default.xml",
    ],
}

JPAM Configuration for Apache Drill

I'm trying to configure PLAIN authentication based on JPAM 1.1 and am going crazy, since it still doesn't work after checking my syntax and settings many times. When I start Drill with cluster-id and zk-connect only, it works, but with both options for PLAIN authentication it fails. Since I started with pam4j and tried JPAM later on, I kept JPAM for this post. In general I don't have any preference; I just want to get it done. I'm running Drill on CentOS in embedded mode.
I've done everything required according to the official documentation:
I downloaded JPAM 1.1, uncompressed it and put libjpam.so into a specific folder (/opt/pamfile/)
I've edited drill-env.sh with:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
I edited drill-override.conf with:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "local",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms: ["PLAIN"]
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
It throws the subsequent error:
Error: Failure in starting embedded Drillbit: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path (state=,code=0)
I've run that *.sh file by hand to make sure that the necessary path is exported, since I don't know if Drill expects that. The path to libjpam should be known by now. I've started SQLLine with sudo et cetera. No chance. The documentation doesn't help; I don't understand why it's so bad and, in my opinion, incomplete. Sadly there is zero explanation of how to troubleshoot or configure basic user authentication in detail.
Or do I have to do something which is not stated but expected? Are there any prerequisites concerning PLAIN authentication which aren't mentioned by Apache Drill itself?
Try changing:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
to:
export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Djava.library.path=/opt/pamfile/"
It works for me.

TYPO3 Upgrade Wizard Fails on DatabaseRowsUpdateWizard

I updated a project from TYPO3 7.6 to ^8 by following the official guide. The latest steps were the Composer update. I removed extensions/packages not compatible with ^8 and updated the ones available for ^8. I'm able to reach the install tool, the TYPO3 admin backend and the frontend (with errors).
So I ended up at the step where I should use the upgrade wizards provided by the install tool. I completed a few wizards without any issues, but then faced a pretty big one. First I tried to run DatabaseRowsUpdateWizard within the install tool, but that failed with a memory error, so I tried the CLI approach with
php -d memory_limit=-1 vendor/bin/typo3cms upgrade:wizard DatabaseRowsUpdateWizard
the processing worked, but it ended with the following error:
[ Helhum\Typo3Console\Mvc\Cli\FailedSubProcessCommandException ]
#1485130941: Executing command "upgrade:subprocess" failed (exit code: "1")
thrown in file vendor/helhum/typo3-console/Classes/Install/Upgrade/UpgradeHandling.php
in line 284
the command that initially failed is:
'/usr/bin/php7.2' 'vendor/bin/typo3cms' 'upgrade:subprocess' '--command' 'executeWizard' '--arguments' 'a:3:{i:0;s:24:"DatabaseRowsUpdateWizard";i:1;a:0:{}i:2;b:0;}'
and here is the subprocess exception:
[ Sub-process exception: TYPO3\CMS\Core\Resource\Exception\InvalidPathException ]
#1320286857: File ../disclaimer_de.html is not valid (".." and "//" is not allowed in path).
thrown in file typo3/sysext/core/Classes/Resource/Driver/AbstractHierarchicalFilesystemDriver.php
in line 71
I'm pretty much lost and don't know where to start to get this fixed. Help is much appreciated.
Issues like these usually stem from broken URLs in RTE fields as can be seen in the error output:
File ../disclaimer_de.html is not valid (".." and "//" is not allowed in path)
In this case you should manually prepare the database and run SQL statements that strip the broken/obsolete ../ prefix from all affected records. An example query:
UPDATE tt_content
SET bodytext = REPLACE(bodytext, 'href="../', 'href="')
WHERE bodytext LIKE '%href="../%';
Notice that this query is very basic and can destroy your data, so make sure you run some SELECT statements first to make sure nothing breaks. Also keep a backup of your database at hand.
Sometimes custom or TER extensions also have RTE fields, such as tt_news, where you might come across the same issue. To fix that, just run the same query against the corresponding table.
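As a dry run before any UPDATE, a SELECT along these lines shows which rows would be touched (uid and pid are standard TYPO3 columns; adapt table and field names for other tables):

```sql
-- Dry run: list the rows an UPDATE on the broken ../ prefix would modify.
SELECT uid, pid, bodytext
FROM tt_content
WHERE bodytext LIKE '%href="../%';
```

Run this, inspect the matched records, and only then apply the UPDATE (with a database backup at hand).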

App works locally but not on Heroku (Application Error); Using Nodemon and Webpack

Alright, I've tried to look up my question on Stack Overflow, but I can't find anything that helps me, since everything I've tried has no effect on the result (Application Error).
I'm really stumped because the app works perfectly fine on my localhost, but I can't get it to work on Heroku; it just gives me an Application Error, so I have no idea what the issue is.
My package.json file looks like this:
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "nodemon --use_strict index.js",
"bundle": "webpack"
},
And I've already tried changing "nodemon" to "node" and getting rid of --use_strict; it still runs perfectly fine on localhost, but the Heroku app still gives me an Application Error.
index.js is the only thing I can think of being bad (changed it and it runs here):
// start the server
app.listen(3000, () => {
console.log('Server is running.');
});
webpack.config.js:
const path = require('path');

module.exports = {
  // the entry file for the bundle
  entry: path.join(__dirname, '/client/src/app.jsx'),
  // the bundle file we will get in the result
  output: {
    path: path.join(__dirname, '/client/dist/js'),
    filename: 'app.js',
  },
  module: {
    // apply loaders to files that meet given conditions
    loaders: [{
      test: /\.jsx?$/,
      include: path.join(__dirname, '/client/src'),
      loader: 'babel-loader',
      query: {
        presets: ["react", "es2015"]
      }
    }],
  },
  // start Webpack in watch mode, so Webpack will rebuild the bundle on changes
  watch: true
};
It deployed properly after git push heroku master:
https://c1.staticflickr.com/3/2873/33519283263_3d9a711311_z.jpg
I'm pretty much trying to make this app work on Heroku:
https://vladimirponomarev.com/blog/authentication-in-react-apps-creating-components
I think a possible problem might be that you have to run "run bundle" in one shell and "npm start" in the other shell.
Another thing: this app had a lot of things that were npm-installed manually in node_modules, which Heroku does not accept if I try to push it to GitHub, and it will crash, so I'm thinking that might be an issue as well, though I have no idea how to get around that.
This also uses Express and MongoDB, and I added my MongoDB info into the index.js file and ran the application; it worked perfectly fine, and after checking the DB, the correct info was also inside it, so it's not that either.
You should use process.env.PORT instead of the custom port 3000.
Check that you have a MongoDB add-on provisioned; you can get one for free, but with limited storage!
And use the config vars of that database, if you haven't done that already!

Logstash-Forwarder 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-forwarder 3.1.1 on CentOS 6.5, where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The ./logstash-forwarder file is being recreated each time 100 events are recorded, but not updated with data. I know the file has been recreated because I changed its permissions to test, and the permissions are reset each time.
Below are my configurations (With some actual data italicized/scrubbed):
Logstash-forwarder 3.1.1
CentOS 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured here. The offset has stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
Can anyone give me any advice on how to fix this problem whether it be a configuration setting I missed or a bug?
Thank you!
After more research I found it was announced that Filebeat is now the preferred forwarder. I even found a post by the owner of Logstash-Forwarder saying that the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much smoother now!
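For reference, the logstash-forwarder configuration above maps roughly onto a Filebeat configuration like this sketch (key names follow the Filebeat 5.x layout and change between versions, so verify against the docs for your release; the scrubbed placeholders are kept as-is):

```yaml
# filebeat.yml (sketch, Filebeat 5.x key names assumed)
filebeat.prospectors:
  - input_type: log
    paths:
      - /a/b/tomcat-*-*/logs/catalina.out
    fields:
      type: apache
      time_zone: EST

output.logstash:
  hosts: ["*server*:*port*"]
  ssl.certificate_authorities: ["/*path*/logstash-forwarder.crt"]
```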