Cygnus FIWARE Docker error - fiware-cygnus

I have downloaded the Git repository to my server and I have followed the steps in the README file located in the docker folder:
docker-compose -f ./docker/0.compose.jar-compiler.yml -p cygnus run --rm compiler
docker build -f ./docker/Dockerfile -t fiware/cygnus .
docker-compose -f ./docker/docker-compose.yml up
However, I'm getting a Java error when I try to run the last command (docker-compose -f ./docker/docker-compose.yml up):
Recreating docker_cygnus_1...
Attaching to docker_cygnus_1
cygnus_1 | + exec /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64/jre/bin/java -Xmx20m -Dflume.root.logger=DEBUG,console -cp '/flume/conf:/flume/lib/*:/flume/plugins.d/cygnus/lib/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -f flume/conf/agent_0.conf -n cygnusagent
cygnus_1 | flume/bin/cygnus-flume-ng: line 232: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64/jre/bin/java: No such file or directory
Can you help me out on this?

I replicated your steps; this is the output:
[ root: fiware-cygnus ]# docker-compose -f ./docker/docker-compose.yml up
Creating docker_cygnus_1
Attaching to docker_cygnus_1
cygnus_1 | + exec /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64/jre/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/flume/conf:/flume/lib/*:/flume/plugins.d/cygnus/lib/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -f flume/conf/agent_0.conf -n cygnusagent
cygnus_1 | SLF4J: Class path contains multiple SLF4J bindings.
cygnus_1 | SLF4J: Found binding in [jar:file:/flume/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
cygnus_1 | SLF4J: Found binding in [jar:file:/flume/plugins.d/cygnus/lib/cygnus-0.12.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
cygnus_1 | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
cygnus_1 | 16/02/05 17:19:56 ERROR nodes.CygnusApplication: A fatal error occurred while running. Exception follows. Details=The specified configuration file does not exist: /flume/conf/agent_0.conf
docker_cygnus_1 exited with code 0
You need to add the file /flume/conf/agent_0.conf with adequate configuration.
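For example, one way to get that file into place (a hedged sketch, not part of the original answer; the template path and the exact approach are assumptions about the repository layout):
# A sketch: get an agent configuration into place. The template path below is an
# assumption; adjust it to your checkout, or write the configuration by hand.
cp conf/agent.conf.template agent_0.conf
# Option 1: copy the file into the container that docker-compose created (named
# docker_cygnus_1 in the logs above) and start it again.
docker cp agent_0.conf docker_cygnus_1:/flume/conf/agent_0.conf
docker start docker_cygnus_1
# Option 2: bind-mount the file by adding a volume entry such as
#   ./agent_0.conf:/flume/conf/agent_0.conf
# to the cygnus service in ./docker/docker-compose.yml, then run
# docker-compose -f ./docker/docker-compose.yml up again.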

Related

Keycloak 20.0.2: Error when exporting realm config for keycloak within a docker container

Very similarly to Error when importing realm config for keycloak within a docker container, I'm running Keycloak in docker-compose, using the image quay.io/keycloak/keycloak:20.0.2 and PostgreSQL.
I'd like to export all of Keycloak's data.
The following command:
docker run `
-it `
--rm `
-v ${PWD}/keycloak-data:/export `
-e LOG_LEVEL=INFO `
-e KC_DB_URL_HOST=<containerName> `
-e KC_DB_URL_PORT=5432 `
-e KC_DB_URL_DATABASE=<dbName> `
-e KC_DB_USERNAME=<userName> `
-e KC_DB_PASSWORD=<password> `
--network <network> `
quay.io/keycloak/keycloak:20.0.2 `
export --realm <realmName> --dir /export
seems to correctly connect to the db, but I keep getting the following error:
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (import_export) mode
The error occurs both while the Keycloak server is running (with the docker-compose up command) and when it is stopped and removed (though PostgreSQL is running, of course!)
How can the Keycloak data be exported?
The error reported in the question
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler]
(main) ERROR: Failed to start server in (import_export) mode
seems not to be relevant to the purpose of exporting data.
Configuring a proper bind-mounted path in the docker-compose.yaml file for the keycloak container and calling the proper command should do the job:
volumes:
  - ./myLocalPath:/export
Then perform the export, using the original container:
docker exec `
-it `
-e LOG_LEVEL=INFO `
-e KC_DB_URL_HOST=<containerName> `
-e KC_DB_URL_PORT=5432 `
-e KC_DB_URL_DATABASE=<dbName> `
-e KC_DB_USERNAME=<userName> `
-e KC_DB_PASSWORD=<password> `
<containerName> `
/opt/keycloak/bin/kc.sh export --realm <realmName> --dir /export
The exported data will then be available in the local folder ./myLocalPath.
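As a quick sanity check afterwards (a suggestion beyond the original answer; the exact file names depend on the realm, shown here for a hypothetical realm called myrealm):
# The bind-mounted host folder should now contain the exported realm files,
# typically one realm definition plus one or more user files.
ls ./myLocalPath
# myrealm-realm.json  myrealm-users-0.json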

Elastic Beanstalk failed to unzip source file with UTF-8 file name

I was trying to deploy a NextJS application to Elastic Beanstalk via eb deploy, but the source bundle failed to unzip during deployment because it contained some pre-built .next pages whose file names are UTF-8 encoded. The error is shown below.
2022/xx/xx xx:xx:xx.xxxxxx [INFO] Executing instruction: StageApplication
2022/xx/xx xx:xx:xx.xxxxxx [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2022/01/31 04:56:44.300483 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2022/01/31 04:56:45.932820 [ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: Command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/ failed with error exit status 50. Stderr:error: cannot create /var/app/staging/.next/server/pages/\u6e2c\u8a66/\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66.html
File name too long
error: cannot create /var/app/staging/.next/server/pages/\u6e2c\u8a66/\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66.json
File name too long
I was able to unzip the file with the option -O UTF-8. Is there any way I could add this flag to the eb deploy unzip process?
Edit 1: I am working with the platform 64bit Amazon Linux 2/5.4.9.
I'm not sure if it is good practice, but I eventually added an .ebextensions config to work around the original unzipping flow.
commands:
  command backup original zip:
    command: |
      logger "backup zip" && cp /opt/elasticbeanstalk/deployment/app_source_bundle /tmp/app_source_bundle_bak &&
      logger "rm existing zip .next folder" && zip -Ad /opt/elasticbeanstalk/deployment/app_source_bundle ".next/*"
    cwd: /home/ec2-user
    ignoreErrors: false
container_commands:
  replace the original zip to staging:
    command: |
      logger "custom unzip" &&
      unzip -O UTF-8 -q -o /tmp/app_source_bundle_bak -d /var/app/staging/
    cwd: /home/ec2-user
    ignoreErrors: false
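To confirm that the bundle's UTF-8 file names decode correctly before extracting, a check along these lines can be added (a suggestion, not part of the original fix; it relies on the same -O UTF-8 option mentioned in the question):
# List the entries of the backed-up bundle, decoding names as UTF-8;
# the pre-built .next page names should show up readable here.
unzip -l -O UTF-8 /tmp/app_source_bundle_bak | head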

Why does the kubectl cp command terminate with exit code 126?

I am trying to copy files from the pod to local using following command:
kubectl cp /namespace/pod_name:/path/in/pod /path/in/local
But the command terminates with exit code 126 and copy doesn't take place.
Similarly while trying from local to pod using following command:
kubectl cp /path/in/local /namespace/pod_name:/path/in/pod
It throws the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "tar": executable file not found in $PATH: unknown
Please help me through this.
kubectl cp is actually a very small wrapper around kubectl exec running tar c | tar x. A side effect of this is that you need a working tar executable in the target container, which you do not appear to have.
In general, kubectl cp is best avoided; it's usually only good for weird debugging stuff.
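For reference, the equivalent explicit pipeline looks roughly like this (a sketch; namespace, pod, and paths are placeholders, and it still needs a working tar inside the container):
# Stream a tar archive out of the pod and unpack it on the local machine
kubectl exec -n <namespace> <pod_name> -- tar cf - -C /path/in/pod . | tar xf - -C /path/in/local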
kubectl cp requires tar to be present in your container, as the help says:
!!!Important Note!!!
Requires that the 'tar' binary is present in your container
image. If 'tar' is not present, 'kubectl cp' will fail.
Make sure your container contains the tar binary in its $PATH
An alternative way to copy a file from local filesystem into a container:
cat [local file path] | kubectl exec -i -n [namespace] [pod] -c [container] "--" sh -c "cat > [remote file path]"
A useful command to copy a file from the pod to local:
kubectl exec -n <namespace> <pod> -- cat <filename with path> > <filename>
For me the cat worked like this:
cat <file name> | kubectl exec -i <pod-id> -- sh -c "cat > <filename>"
Example:
cat file.json | kubectl exec -i server-77b7976cc7-x25s8 -- sh -c "cat > /tmp/file.json"
I didn't need to specify the namespace since I ran the command from a specific project, and since we have only one container, I didn't need to specify that either.

MongoDB Docker container: ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device

I started a MongoDB container like so:
docker run -d -p 27017:27017 --net=cdt-net --name cdt-mongo mongo
I saw that my MongoDB container exited:
0e35cf68a29c mongo "docker-entrypoint.s…" Less than a second ago Exited (1) 3 seconds ago cdt-mongo
I checked my Docker logs, and I see:
$ docker logs 0e35cf68a29c
about to fork child process, waiting until server is ready for connections.
forked process: 21
2018-01-12T23:42:03.413+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-01-12T23:42:03.417+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device
ERROR: child process failed, exited with error number 1
Does anyone know what this error is about? Not enough space in the container?
I had to delete old Docker images to free up space; here are the commands I used:
# remove all unused / orphaned images
echo -e "Removing unused images..."
docker rmi -f $(docker images --no-trunc | grep "<none>" | awk "{print \$3}") 2>&1 | cat;
echo -e "Done removing unused images"
# clean up stuff -> using these instructions https://lebkowski.name/docker-volumes/
echo -e "Cleaning up old containers..."
docker ps --filter status=dead --filter status=exited -aq | xargs docker rm -v 2>&1 | cat;
echo -e "Cleaning up old volumes..."
docker volume ls -qf dangling=true | xargs docker volume rm 2>&1 | cat;
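Before and after cleaning up, it can help to see where the space is actually going (a suggested check beyond the original answer; docker system df is available in reasonably recent Docker versions):
# Show Docker disk usage broken down by images, containers, and local volumes
docker system df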
We've experienced this problem recently while using docker-compose with mongo and a bunch of other services. There are two fixes which have worked for us.
Clear down unused stuff
# close down all services
docker-compose down
# clear unused docker images
docker system prune
# press y
Increase the disk image size available to Docker - this will depend on your installation of Docker. On Mac, for example, it defaults to 64 GB and we doubled it to 128 GB via the UI.
We've had this problem in both Windows and Mac and the above fixed it.

Apache Flink - org.apache.flink.client.program.ProgramInvocationException

I have created an application with Apache Flink 1.0.3 using Scala 2.11.7 and I want to test it locally (in a single JVM). So I did the following, as stated on the website:
./bin/start-local.sh
tail log/flink-*-jobmanager-*.log
And it starts just fine, I can see the web interface at localhost:8081.
Then I tried to submit my application, but I get either an exception or a weird message. For example, when I type any of the following commands:
./bin/flink run ./myApp.jar
./bin/flink run ./myApp.jar -c MyMain
./bin/flink run ./myApp.jar -c myMain.class
./bin/flink run ./myApp.jar -c myMain.scala
./bin/flink run ./myApp.jar -c my.package.myMain
./bin/flink run ./myApp.jar -c my.package.myMain.class
./bin/flink run ./myApp.jar -c my.package.myMain.scala
I get the following exception:
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Neither a 'Main-Class', nor a 'program-class' entry was found in the jar file.
at org.apache.flink.client.program.PackagedProgram.getEntryPointClassNameFromJar(PackagedProgram.java:571)
at org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:188)
at org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:126)
at org.apache.flink.client.CliFrontend.buildProgram(CliFrontend.java:922)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:301)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
And when I type any of the following commands:
./bin/flink run ./ -c myMain myApp.jar
./bin/flink run ./ -c myMain.class myApp.jar
./bin/flink run ./ -c myMain.scala myApp.jar
./bin/flink run ./ -c my.package.myMain myApp.jar
./bin/flink run ./ -c my.package.myMain.class myApp.jar
./bin/flink run ./ -c my.package.myMain.scala myApp.jar
I get the following error:
JAR file is not a file: .
Use the help option (-h or --help) to get help on the command.
The above commands do not work with either -c or --class. I use IntelliJ, and I compiled the application using the Build Module from Dependencies option. What am I doing wrong?
The correct way to submit your JAR is:
bin/flink run -c my.package.myMain myApp.jar
You have to specify the arguments (like -c) before the JAR file. You got the error messages initially because ./ was interpreted as the JAR and the rest of the line was ignored.
The -p argument is optional. Your last example works because the argument order is correct, not because of the parallelism flag.
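If the exception still shows up, it is worth checking whether the jar actually carries a Main-Class entry (a quick check, assuming an unzip binary is available; the jar name is the one from the question):
# Print the jar's manifest; if no Main-Class line appears, Flink cannot find an
# entry point unless one is passed with -c/--class.
unzip -p myApp.jar META-INF/MANIFEST.MF | grep -i main-class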
I figured out what was wrong. Flink needed the parallelism degree to be passed as an argument; otherwise there was a program invocation exception. The command below worked for me:
./bin/flink run -p2 --class myMain myApp.jar
You have to mention the entry point class in your pom file. See the following part of the pom file snippet:
<transformers>
  <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
    <mainClass>com.xyz.myMain</mainClass>
  </transformer>
</transformers>
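With the shade plugin writing that Main-Class into the manifest, the jar can then be submitted without the -c flag (a short usage note, not from the original answer; the class name above is only an example):
# After rebuilding the fat jar with the shade plugin configured as above:
bin/flink run myApp.jar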