Helm chart for Apache degraded in ArgoCD - Kubernetes

I have tried deploying the Apache Helm chart via ArgoCD; it shows up as Degraded in the console even though it is in sync. I am using the Kubernetes cluster on Docker Desktop for Mac (M1 chip). I'm not quite sure what the issue is.
The Helm chart details are:
REPO URL = https://charts.bitnami.com/bitnami
CHART = apache:9.0.1
When I check the logs I see the output below:
macbookpro#argo-cd % argocd app logs apache
apache 23:11:06.35
apache 23:11:06.37 Welcome to the Bitnami apache container
apache 23:11:06.39 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-apache
apache 23:11:06.41 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-apache/issues
apache 23:11:06.43
apache 23:11:06.44 INFO ==> ** Starting Apache setup **
/usr/bin/realpath: /bitnami/apache/conf: No such file or directory
apache 23:11:06.71 INFO ==> ** Apache setup finished! **
apache 23:11:06.81 INFO ==> ** Starting Apache **
[Tue Jan 25 23:11:06.943934 2022] [core:emerg] [pid 1] (95)Operation not supported: AH00023: Couldn't create the mpm-accept mutex
(95)Operation not supported: could not create accept mutex
AH00015: Unable to open logs

Posting the comment as a community wiki answer for better visibility:
That does look like an incorrect-architecture issue. Please refer to the GitHub issue.
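A quick way to confirm the mismatch is to compare the node architecture with the image the chart deployed. A minimal sketch; the app.kubernetes.io/name=apache label is the Bitnami chart's default and may differ in your install:
# Docker Desktop on an M1 reports arm64 here
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'
# The image the chart pulled; Bitnami apache images of that era were amd64-only
kubectl get pods -l app.kubernetes.io/name=apache -o jsonpath='{.items[*].spec.containers[*].image}'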

Related

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the .tar.gz (I am on Mac) for Confluent version 7.0.0 from the official Confluent site and was following the setup for LOCAL (1 node). Kafka/ZooKeeper start fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the SchemaRegistry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to:
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
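(A quick way to verify the broker is actually reachable from the same host, assuming the Kafka CLI tools shipped in the Confluent tarball are on the PATH:)
# Prints the broker's supported API versions if the connection succeeds
kafka-broker-api-versions --bootstrap-server localhost:9092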
I saw some posts saying the Schema Registry may be unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics/Kafka, so my questions are:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead, and the only change I had to make to the Docker Compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, just an unused port not already taken by another Confluent service listed in docker-compose.yaml). My local setup is working fine now.
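For anyone making the same change, only the host side of the port mapping needs to move. A sketch based on Confluent's cp-all-in-one compose file (the service name, image tag, and broker address here are that file's defaults and may differ in yours):
schema-registry:
  image: confluentinc/cp-schema-registry:7.0.0
  depends_on:
    - broker
  ports:
    - "8082:8081"   # host:container -- remap the host port, keep 8081 inside
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081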

Starting CodeReady Containers with libvirt causes "format of backing image was not specified in the image metadata"

I'm trying to use CRC for testing OpenShift 4 on my laptop (Ubuntu 20). CRC version 1.17 doesn't support VirtualBox virtualization, so following the setup instructions
https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.17/html/getting_started_guide/installation_gsg
I'm using libvirt, but when I start the cluster with crc start it throws the following error:
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Starting CodeReady Containers VM for OpenShift 4.5.14...
ERRO Error starting stopped VM: virError(Code=55, Domain=18, Message='Requested operation is not valid: format of backing image '/home/claudiomerli/.crc/cache/crc_libvirt_4.5.14/crc.qcow2' of image '/home/claudiomerli/.crc/machines/crc/crc.qcow2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)')
I have no experience with libvirt, so I'm stuck on this, and I'm not finding anything online...
Thanks
There is an issue with the crc_libvirt_4.5.14 image. The easiest way to fix it is to rebase the machine image onto the cached backing image, explicitly declaring both formats:
qemu-img rebase -f qcow2 -F qcow2 -b /home/${USER}/.crc/cache/crc_libvirt_4.5.14/crc.qcow2 /home/${USER}/.crc/machines/crc/crc.qcow2
Now, if you try to do a crc start, you are going to face a "Permission denied" error, which is related to AppArmor, unless you have whitelisted your home directory. If you don't want to hack around with AppArmor settings, the /var/lib/libvirt/images directory is supposed to be whitelisted. Move the image there:
sudo mv /home/${USER}/.crc/machines/crc/crc.qcow2 /var/lib/libvirt/images
then edit the virtual machine settings to point at the new image location: virsh edit crc, then replace <source file='/home/yourusername/.crc/machines/crc/crc.qcow2'/> with <source file='/var/lib/libvirt/images/crc.qcow2'/>.
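For reference, the disk stanza in the domain XML should end up looking roughly like this after the edit (a sketch; the driver and target attributes shown are typical defaults and will vary with your definition):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <!-- was: /home/yourusername/.crc/machines/crc/crc.qcow2 -->
  <source file='/var/lib/libvirt/images/crc.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>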
Then do the crc start and... that's it.
The relevant GitHub issues to follow:
https://github.com/code-ready/crc/issues/1596
https://github.com/code-ready/crc/issues/1578

OpenShift 3.11 cloud integration fails with lookup RequestError: send request failed\\ncaused by: Post https://ec2.eu-west-.amazonaws.com

Following the docs: https://docs.openshift.com/container-platform/3.11/install_config/configuring_aws.html#aws-cluster-labeling
I am configuring the cloud integration after the cluster build.
When the cluster services are restarted on the masters, it fails to look up AWS instances:
22 16:32:10.112895 75995 server.go:261] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0c5cbd50923f9c6d2: "error listing AWS instances: \"Request.service: main process exited, code=exited, status=255/n/a Error: send request failed\\ncaused by: Post https://ec2.eu-west-.amazonaws.com/: dial tcp: lookup ec2.eu-west-.amazonaws.com: no such host\""
On closer inspection it seems to be due to an incorrect hostname:
https://ec2.eu-west-.amazonaws.com/ VS https://ec2.eu-west-2.amazonaws.com/
So I double-checked the config, which seems correct:
# cat /etc/origin/cloudprovider/aws.conf
[Global]
Zone = eu-west-2
I had a google and it seems to be a similar issue to this:
https://github.com/kubernetes-sigs/kubespray/issues/4345
Is there a way to work around this? Moving off 3.11 isn't an option right now.
Thanks.
It looks as though it needs to be the availability zone rather than the region:
# cat /etc/origin/cloudprovider/aws.conf
[Global]
Zone = eu-west-2a
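If you're unsure of the exact zone for a node, it can be read from the standard EC2 instance metadata endpoint (a quick sanity check, not part of the documented procedure):
# Run on the instance itself; prints e.g. eu-west-2a
curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone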

Can no longer deploy to Bluemix Rules Engine Service

When I originally set up my Rules Engine service in Bluemix, I could deploy from my Eclipse Juno environment just fine. I just tried to deploy a new project this morning, and I got the following error in the deployment report in Eclipse:
ilog.rules.res.model.IlrAlreadyExistException: Unknown RuleApp: /RefillRulesApp/1.0.
at com.ibm.rules.res.internal.MutableRepositoryRESTAdapter.addRuleApp(MutableRepositoryRESTAdapter.java:86)
at com.ibm.rules.decisionservice.internal.RESClient$3.execute(RESClient.java:332)
at com.ibm.rules.decisionservice.internal.RESClient$3.execute(RESClient.java:1)
at com.ibm.rules.decisionservice.internal.RESClient.safeInvokeRES(RESClient.java:132)
at com.ibm.rules.decisionservice.internal.RESClient.deploy(RESClient.java:299)
at com.ibm.rules.decisionservice.internal.DsResRestClient.deploy(DsResRestClient.java:168)
at com.ibm.rules.studio.model.decisionservice.impl.Server.deploy(Server.java:310)
at com.ibm.rules.decisionservice.DsRuleAppDeployManager.deploy(DsRuleAppDeployManager.java:38)
at com.ibm.rules.decisionservice.DsDeployManager.deploy(DsDeployManager.java:88)
at com.ibm.rules.studio.decisionservice.SDsXOMDeploymentJob.deploy(SDsXOMDeploymentJob.java:203)
at com.ibm.rules.studio.decisionservice.SDsRuleAppDeploymentJob.deployRuleApp(SDsRuleAppDeploymentJob.java:101)
at com.ibm.rules.studio.decisionservice.SDsRuleAppDeploymentJob.deploy(SDsRuleAppDeploymentJob.java:65)
at com.ibm.rules.studio.decisionservice.SDsXOMDeploymentJob.runInWorkspace(SDsXOMDeploymentJob.java:81)
at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
I checked the RES console server log and there aren't any untoward messages in it.
The Decision Server version information looks like this:
Version: Decision Server 8.7.0.1, Decision Engine 1.10.0
Patch level: Build #2 on 2015-03-13 16:54:27
Release status: COMMERCIAL
Persistence Type: datasource (DB2/LINUXX8664 SQL10070)
Startup Time: Jan 29, 2016 4:17:18 PM GMT-05:00
Last Update Time: Feb 2, 2016 3:01:23 PM GMT-05:00
I checked for updates to the Eclipse plugin, and it looks like I am up to date.
If I check in the Explorer in the RES console, I can see that it partially deployed:
[Screenshot: the partially deployed RuleApp in the RES console Explorer]
Notice how the rule app is greyed-out.
Any ideas? Thanks...
I found that if I deploy the RuleApp from a 'Rule project for Decision Service', I get the same error. Can you deploy it from a RuleApp project which references a 'Standard Rule Project'? That should fix the issue.

ATG Commerce v11 CRS install error

I have installed Oracle ATG v11 with the Commerce Reference Store. When I start up the production server and go to the URL domain/crs/storeus, I see a blank white page and get the following error in the console:
Oct 13, 2014 1:56:37 PM com.endeca.infront.site.SiteManager getSite
SEVERE: Unable to retrieve site definition for site id: /storeSiteUS
com.endeca.store.exceptions.PathNotFoundException: No node found at
path: [pages].
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:153)
at com.endeca.store.configuration.InternalNode.getNodeInfo(InternalNode.java:221)
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:150)
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:61)
........................................
**** Error Mon Oct 13 13:00:47 +00:00 2014 1413205247448 /atg/endeca/assembler/droplet/InvokeAssembler A problem occurred assembling the content for content item /content/Web/Home Pages. The response received was {#type=ContentSlot, atg:currentSiteProductionURL=/crs/storeus, canonicalLink=com.endeca.infront.cartridge.model.NavigationAction#2b35e9c6, ruleLimit=1, #error=com.endeca.infront.content.ContentException: com.endeca.navigation.ENEConnectionException: Error establishing connection to retrieve Navigation Engine request 'http://localhost:15000/graph?node=0&profiles=sitegroup.siteGroupUS|NoPriceRange|site.storeSiteUS&offset=0&nbins=0&irversion=640'. Tried all: '2' addresses, but could not connect over HTTP to server: 'localhost', port: '15000' Check MDEX Logs and specified query parameters. , contentCollection=/content/Web/Home Pages}. Servicing the error open parameter.
I am assuming this error is related to Endeca? I have downloaded CAS, Tools and Frameworks with Experience Manager, MDEX, and Platform Services. Do I need to start these, or have I missed a part of the Endeca install?
The value of the configurationPath attribute in DefaultFileStoreFactory.properties, located at \localconfig\atg\endeca\assembler\cartridge\manager, may be incorrect.
In OOTB CRS, we normally provide the following value for the configurationPath attribute:
/ToolsAndFrameworks/11.1.0/server/workspace/state/repository/CRS
Could you please verify that the .zip is present at the path provided in DefaultFileStoreFactory.properties?
Just check whether you are able to connect to the URL below:
host:15000/admin?op=stats
If you can connect to this URL, then MDEX is running. You can also log in to Experience Manager and check whether the dgraph and dgidx processes are running.
If you cannot connect, check that all the services (tools and HTTP) are running and accessible. You can check the Endeca logs to debug further.
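A quick way to run that check from a shell on the server (a sketch; substitute your MDEX host and port if they differ from the CRS defaults):
# Returns an XML statistics page when the dgraph is up
curl -s "http://localhost:15000/admin?op=stats"
# Or just confirm something is listening on the dgraph port
ss -ltn | grep 15000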
Your DGraph is not (yet) started.
(Hit this URL in your browser and verify: http://localhost:15000/graph?node=0&profiles=sitegroup.siteGroupUS|NoPriceRange|site.storeSiteUS&offset=0&nbins=0&irversion=640&format=xml)
Possible reasons are:
You did not run a baseline update from ATG (from the ProductCatalogSimpleIndexingAdmin dyn/admin component).
You did not run the promote content script (from your Endeca app's control folder).
Your Services are not working properly (or not started at all). Check that Platform Services and Tools And Frameworks are started.
The solution is to properly define the value of the configurationPath property in DefaultFileStoreFactory.properties, for example:
configurationPath=E:/Endeca/Apps/CRS/data/workbench/application_export_archive/CRS
If your OS is Windows, still write the path Unix-style (with forward slashes) as shown above.