Can you run a script file within JBoss Fuse's Karaf terminal?

The Karaf* terminal allows some scripting of commands at the prompt, e.g.:
($.context bundles) | grep -i felix
I have seen threads that discuss running multi-line scripts, presumably contained in a file.
My question is simply: How does one run a karaf language script file from the terminal? For my application the script can be a local file.
Thanks Very Much
*: JBoss Fuse (6.1.0.redhat-379)

You can use the shell:source command like this:
Here is a sample script.
computer:karaf donald$ cat test.script
bundle:list -t 0 | head
echo 'Hello world 1'
echo 'Hello world 2'
echo 'Hello world 3'
Here is how you would invoke it from karaf:
Cobalt:bin donald$ ./karaf
[Apache Karaf ASCII-art banner]
Apache Karaf (3.0.2)
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.
karaf@root()> shell:exec pwd
/Users/donald/apache-karaf-3.0.2
karaf@root()> shell:exec ls
LICENSE
NOTICE
README
RELEASE-NOTES
bin
data
demos
deploy
etc
instances
lib
lock
system
test.script
karaf@root()> shell:source test.script
START LEVEL 100 , List Threshold: 0
ID | State | Lvl | Version | Name
-----------------------------------------------------------------------------------------------------------
0 | Active | 0 | 4.2.1 | System Bundle
1 | Active | 5 | 2.2.0 | OPS4J Pax Url - aether:
2 | Active | 5 | 2.2.0 | OPS4J Pax Url - wrap:
3 | Active | 8 | 1.7.4 | OPS4J Pax Logging - API
4 | Active | 8 | 1.7.4 | OPS4J Pax Logging - Service
5 | Active | 10 | 3.0.2 | Apache Karaf :: Service :: Guard
6 | Active | 10 | 1.8.0 | Apache Felix Configuration Admin Service
Hello world 1
Hello world 2
Hello world 3
karaf@root()>

Create a file and add all the Fuse commands you want to run in one shot. Then, from the Karaf shell, run it with the shell:source command.

Related

Colab Pro GPU doesn't work on local VS Code

I tried to use Colab Pro's GPU from my local VS Code via colab-ssh, using ngrok.
But as you can see below, when I check nvidia-smi in my local VS Code terminal, it doesn't show the Memory-Usage (it prints "Function Not Found" instead).
What should I do?
Sat Jul 16 04:08:30 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   35C    P0    29W / 250W |  Function Not Found  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

ERROR: (gcloud.run.deploy) argument --set-env-vars: Bad syntax for dict arg

I'm using Cloud Code (extension for Visual Studio Code) and during the deploy, via UI, I'm trying to set the Environment Variables field like this:
KEY1:value1
KEY2:value2,value3
But I'm having this error:
Failed to deploy the app. Error: ERROR: (gcloud.run.deploy) argument --set-env-vars: Bad syntax for dict arg: [value3]. Please see gcloud topic flags-file or gcloud topic escaping for information on providing list or dictionary flag values with special characters. ,Usage: gcloud run deploy [[SERVICE] --namespace=NAMESPACE] [optional flags] optional flags may be --add-cloudsql-instances | --allow-unauthenticated | --args | --async | --binary-authorization | --breakglass | --clear-binary-authorization | --clear-cloudsql-instances | --clear-config-maps | --clear-env-vars | --clear-key | --clear-labels | --clear-post-key-revocation-action-type | --clear-secrets | --clear-vpc-connector | --cluster | --cluster-location | --command | --concurrency | --connectivity | --context | --cpu | --cpu-throttling | --env-vars-file | --help | --image | --ingress | --key | --kubeconfig | --labels | --max-instances | --memory | --min-instances | --namespace | --platform | --port | --post-key-revocation-action-type | --region | --remove-cloudsql-instances | --remove-config-maps | --remove-env-vars | --remove-labels | --remove-secrets | --revision-suffix | --service-account | --set-cloudsql-instances | --set-config-maps | --set-env-vars | --set-secrets | --source | --tag | --timeout | --no-traffic | --update-config-maps | --update-env-vars | --update-labels | --update-secrets | --use-http2 | --vpc-connector | --vpc-egress For detailed information on this command and its flags, run: gcloud run deploy --help
So it seems the comma needs to be escaped. How to do that via Cloud Code UI, please?
If you pass an environment variable like --set-env-vars "A=B,C,D" to gcloud, it will treat the comma (,) character as the start of another environment variable declaration and will try to split the value into multiple environment variables. This is explained here in detail.
However, to prevent splitting on commas, you need to specify a different custom delimiter that you're sure won't occur in your value string, such as ##:
--set-env-vars "^##^A=B,C,D"
You may also use a format like this as mentioned in the official docs:
--set-env-vars "^#^KEY1=value1,value2,value3#KEY2=..."
I don't think there is a work around here.
We are working to fix this through https://github.com/GoogleCloudPlatform/cloud-code-vscode/issues/560.
For now I could work around it by leaving the "Environment Variables" UI field empty and using a Dockerfile to set the variables, where this syntax works:
ENV KEY1='value1'
ENV KEY2='value2,value3'

Strange results using tramp :session in org-mode

TL;DR: If you use Emacs Org-Mode with Tramp on Windows, using plink with an SSH :session, you get strange output.
Long version:
I use Emacs Org-Mode, which is a great tool, and I like to use it in a literate DevOps way, which is also a great idea: document your work while you are doing it.
You will hate me, but I have to use a Windows workstation at work. So I tested it with PuTTY's plink:
#+NAME: harddisk_worker001.sh
#+BEGIN_SRC sh :dir /plink:worker001:/tmp
df --human-readable --local --exclude-type=tmpfs --exclude-type=overlay | awk '{print $5 "\t" $1}' | (read -r; printf "%s\n" "$REPLY"; sort --reverse)
#+END_SRC
#+RESULTS: harddisk_worker001.sh
| Use% | Filesystem |
| 73% | /dev/mapper/system-lvroot |
| 6% | /dev/mapper/system-lvopt |
| 6% | /dev/mapper/system-lvhome |
| 47% | /dev/sda1 |
| 2% | /dev/mapper/system-lvtmp |
| 27% | /dev/mapper/system-lvvar |
| 0% | devtmpfs |
The result was great, but I also wanted to use the :session feature to speed things up:
#+NAME: harddisk_worker001.sh
#+BEGIN_SRC sh :dir /plink:worker001:/tmp :session worker001
df --human-readable --local --exclude-type=tmpfs --exclude-type=overlay | awk '{print $5 "\t" $1}' | (read -r; printf "%s\n" "$REPLY"; sort --reverse)
#+END_SRC
#+RESULTS: harddisk_worker001.sh
| Filesystem |
| /dev/mapper/system-lvroot |
| /dev/mapper/system-lvopt |
| /dev/mapper/system-lvhome |
| /dev/sda1 |
| /dev/mapper/system-lvtmp |
| /dev/mapper/system-lvvar |
| devtmpfs |
This was not the expected result! Can you explain why the table differs? I am not able to see the root cause of this, other than perhaps a bug in the tramp-plink implementation, but I am not sure about that.
Can you reproduce this?
I don't know too much about Org, so I debugged the resulting Tramp calls. Your first command results in org-babel--shell-command-on-region, which invokes a proper process-file call.
Your second example, with the :session argument, doesn't seem to call any Tramp operation related to processes. So I believe Org is trying something internally, which I cannot debug further. Maybe a process invocation which isn't Tramp-aware, who knows.
I recommend writing an Org bug report.

What code-repository should the Dockerfile get committed to?

Long story short
Where should I commit the Dockerfile? In the project codebase or in the devops codebase?
Reasoning details:
Without docker and without CI
In the ancient times, when developing a complex application with multiple code-bases, one normally wanted to have one repo per project and have all the passwords, credentials and dev/test/pre/prod configurations separated from the code.
  +---------+   +---------+   +---------+   +---------+
  |  app-1  |   |  app-2  |   |  app-3  |   |  app-4  |
  +---------+   +---------+   +---------+   +---------+

  +----+
  |    |\
  |    +-+
  | conf |
  | files|
  +------+
In the old-ancient times one sysadmin installed the software on the server and then copied the config files over. Back in the 90's the sysop usually kept those files in a directory of his own, shared only with the boss.
With CI but still without docker
Later we improved the cycle: in continuous development/integration environments, "the system" itself needs to be able to clone all those repos, "build" the applications and configure them so they are ready to run, then copy the builds onto the servers and configure them accordingly.
This enables all the developers to trigger deploys to production while still not compromising the secret keys.
Before containers, companies typically had an extra "devops" repo (AKA the CI repo) where all those config files were organized and known by a script. The CI server (pre-Docker) knows all the source-code repos, knows the destination network topology, has the passwords to the cloud, and copies/builds/deploys everything to its destination and also configures it, making human intervention unnecessary provided the servers are up and running.
  +---------+   +---------+   +---------+   +---------+
  |  app-1  |   |  app-2  |   |  app-3  |   |  app-4  |
  +---------+   +---------+   +---------+   +---------+

  +----------------+
  |     devops     |
  +----------------+
  | config-1-devel |
  | config-1-pre   |
  | config-1-prod  |
  | config-2-devel |
  | [...]          |
  | config-4-prod  |
  +----------------+
CI with Docker
When it comes to making Docker play a role in the equation, I wonder whether the correct place for the Dockerfile is inside the application's source-code repository or in the devops repository.
Will the Dockerfile go into the app code-base?
Unless we are writing open-source code that needs to run on many platforms, companies usually establish a target platform, and the coders "know" beforehand that the target system will be Ubuntu, CentOS or the like.
On the other hand, it is now the coders themselves who touch the Dockerfile, as one more source-code file. This pushes us to think that the Dockerfile fits in each code-base, as the app and the system it runs on will probably be coupled by certain requirements.
  +-------------+   +-------------+   +-------------+   +-------------+
  |    app-1    |   |    app-2    |   |    app-3    |   |    app-4    |
  +-------------+   +-------------+   +-------------+   +-------------+
  | Dockerfile-1|   | Dockerfile-2|   | Dockerfile-3|   | Dockerfile-4|
  +-------------+   +-------------+   +-------------+   +-------------+

  +----------------+
  |     devops     |
  +----------------+
  | config-1-devel |
  | config-1-pre   |
  | config-1-prod  |
  | config-2-devel |
  | [...]          |
  | config-4-prod  |
  +----------------+
Or will the Dockerfile go into the devops code-base (AKA the CI server code-base)?
But it also seems the programmer should write the very same lines of code, for example when coding a web application, regardless of whether it runs under an Apache, an nginx or a Caddy server... so the "decision" about the runtime seems like it should be coded into the devops code-base:
  +-------------+   +-------------+   +-------------+   +-------------+
  |    app-1    |   |    app-2    |   |    app-3    |   |    app-4    |
  +-------------+   +-------------+   +-------------+   +-------------+

  +----------------+
  |     devops     |
  +----------------+
  | Dockerfile-1   |
  | Dockerfile-2   |
  | Dockerfile-3   |
  | Dockerfile-4   |
  +----------------+
  | config-1-devel |
  | config-1-pre   |
  | config-1-prod  |
  | config-2-devel |
  | [...]          |
  | config-4-prod  |
  +----------------+
Within the team we can't agree on the proper way, and I've searched but I am unable to find documentation that settles whether the different Dockerfiles should be committed into the app repos or into the devops repo (AKA the CI repo).
Where should I commit them?
I would suggest keeping it with your application, as it should evolve with the code base.
IMHO best practice is to keep CI code and config with your app, not in a separate repo, so you don't have to manage dependencies between app code versions and configurations.
Dockerfile into the app code-base
Maybe if the organization has a few non-standardized applications, or there are multiple languages with different strategies in the same company, the Dockerfile should live at the repository level to allow direct modifications.
But what happens if we are talking about dozens or hundreds of microservices?
In that case, the developer should not modify the Dockerfile, because it was developed beforehand by the architect, technical lead or senior developer.
Let's imagine a Dockerfile, an entrypoint.sh and other required files that are the base for dozens of applications of the same nature, like Java microservices, in the same organization. Here are some issues to consider if the Dockerfile lives in the code-base:
If you need to change something in this Dockerfile, you will have to change it in every git repository.
You must put in extra effort to make sure developers don't break this Dockerfile, because it is the same for all the other microservices. Can we really imagine dozens of microservices, each with a different Dockerfile, I mean each with its own architecture?
If all microservices need a common file for the docker build, you must put this file in every git repository! Examples: ssh keys, tokens, scripts, artifact download keys, etc.
Dockerfile into the devops code-base
My advice, based on my dozens of applications, is just what you mentioned. Here are some advantages:
Just one Dockerfile for all my microservices (for example).
If I need to upgrade or fix something, I just need to change one Dockerfile and nothing else.
Developers can see and understand this Dockerfile, but never break it.
CI Platform
If you choose to put the Dockerfile into the devops code-base instead of into every git repository in your organization, you need to set up a flow something like this (see the shell sketch after this list):
The developer pushes code to the git repository.
The CI platform receives the notification.
The CI platform clones the app git repository.
The CI platform clones the devops code-base, which contains all the Dockerfiles of your organization.
The CI platform determines the nature of the app, to select the correct Dockerfile and other files like entrypoint.sh, etc.
The CI platform copies the Dockerfile into the app source code root folder.
The CI platform performs a docker build ...
The CI platform performs a docker push ... if you have a private Docker registry (recommended).
Finally, the CI platform performs an instantiation (docker run) of the Docker image on any of the remote servers (with Docker previously installed).
I would recommend Jenkins, due to its ease of use.
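As a rough illustration of steps 3 to 8 above, the CI job could run a shell script along these lines (the repository URLs, registry host and devops repo layout are made-up placeholders, not a definitive implementation):
set -e
APP=app-1                                                    # decided from the webhook payload
git clone git@git.example.com:org/${APP}.git                 # clone the app repo
git clone git@git.example.com:org/devops.git                 # clone the devops code-base
# pick the Dockerfile matching the app's nature and copy it into the app root
cp devops/dockerfiles/java-microservice/Dockerfile "${APP}/"
cp devops/dockerfiles/java-microservice/entrypoint.sh "${APP}/"
cd "${APP}"
docker build -t registry.example.com/${APP}:${BUILD_NUMBER:-dev} .
docker push registry.example.com/${APP}:${BUILD_NUMBER:-dev}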
Config Files
My advice is, if possible, do not use complex config files at the build stage of your applications. Open-source technologies are good at this, but if you are using some proprietary language, you're toast :S
Anyway, if you need config files at the build stage, you could use:
https://zookeeper.apache.org to store the plain files as node text for every application
https://github.com/jrichardsz-software-architect-tools/configurator if you just need to centralize configuration variables of your applications
Here some information related to externalization of applications configurations: https://stackoverflow.com/a/51268633/3957754
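For the ZooKeeper option, a minimal sketch of storing and reading a plain config file as node data could look like this (the server address, node path and file name are made up for illustration):
zkCli.sh -server zk.example.com:2181 create /configs/app-1 "$(cat config-1-prod.properties)"
zkCli.sh -server zk.example.com:2181 get /configs/app-1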

How do I uninstall all rpms installed today with yum?

I am very familiar with
rpm -qa --last
and have found it to be very handy on certain occasions. However on this occasion I accidentally got a bit overzealous and installed a large yum group.
yum groupinstall "Development tools"
Is there an easy way to uninstall everything I just installed? Seems to me there should be some way to combine rpm query and rpm erase. i.e. piping the output from a query command into the remove command.
Update, based on feedback from user rickhg12hs:
It was pointed out that I can see the transaction id with yum history which I did not know about. Here is what that looks like:
$ yum history
Loaded plugins: fastestmirror, security
ID | Login user | Date and time | Action(s) | Altered
----------------------------------------------------------------------------
69 | <jds> | 2015-05-11 01:31 | Install | 1
68 | <jds> | 2015-05-11 01:31 | Install | 1
67 | <jds> | 2015-05-11 01:10 | I, U | 210
66 | <jds> | 2015-05-05 12:41 | Install | 1
65 | <jds> | 2015-04-30 17:57 | Install | 2
64 | <ansible> | 2015-04-30 10:11 | Install | 1
63 | <ansible> | 2015-04-30 10:11 | Install | 1
62 | <ansible> | 2015-04-30 10:11 | Install | 1 EE
61 | <ansible> | 2015-04-30 10:11 | Install | 1
60 | <ansible> | 2015-04-30 10:11 | Install | 1
59 | <ansible> | 2015-04-30 09:58 | Install | 19 P<
58 | <ansible> | 2015-04-29 18:28 | Install | 1 >
57 | <ansible> | 2015-04-29 18:28 | Install | 1
56 | <ansible> | 2015-04-29 18:28 | Install | 9
55 | <ansible> | 2015-04-29 18:28 | Install | 3
54 | <ansible> | 2015-04-29 18:28 | Install | 1
53 | <ansible> | 2015-04-29 18:27 | I, U | 5
52 | <ansible> | 2015-04-29 18:27 | I, U | 4
51 | <ansible> | 2015-04-29 18:27 | Install | 1
50 | <ansible> | 2015-04-29 18:27 | Install | 1
and tada: There it is, a transaction id.
I want to uninstall from transaction id 67. So now that I am a bit wiser I have a new question.
So how can I use the yum or rpm command to uninstall a transaction?
Note: it was also pointed out to me that I can do a
$ yum history info 67 |less
Loaded plugins: fastestmirror, security
Transaction ID : 67
Begin time : Mon May 11 01:10:09 2015
Begin rpmdb : 1012:bb05598315dcb21812b038a356fa06333d277cde
End time : 01:13:25 2015 (196 seconds)
End rpmdb : 1174:cb7855e82c7bff545319c38b01a72a48f3ada1ab
User : <jds>
Return-Code : Success
Command Line : groupinstall Additional Development
Transaction performed with:
Installed rpm-4.8.0-38.el6_6.x86_64 #updates
Installed yum-3.2.29-60.el6.centos.noarch #anaconda-CentOS-201410241409.x86_64/6.6
Installed yum-plugin-fastestmirror-1.1.30-30.el6.noarch #anaconda-CentOS-201410241409.x86_64/6.6
Packages Altered:
Dep-Install GConf2-2.28.0-6.el6.x86_64 #base
Install GConf2-devel-2.28.0-6.el6.x86_64 #base
Dep-Install ORBit2-2.14.17-5.el6.x86_64 #base
... snip ...
I think this could prove quite helpful under certain circumstances.
If you uninstall packages, then you run the risk of removing things that were already there, but happened to be upgraded. As a rule, you should use yum (or equivalent) for managing packages, which allows you to downgrade a package. This would remove new packages, and downgrade existing ones. See for example How to safely downgrade or remove glibc with yum and rpm
Selecting the names of packages to downgrade can be done using the output of rpm -qa, formatted to allow simple selection of the given date. For instance (see CentOS: List the installed RPMs by date of installation/update?), you can list packages in the reverse-order of their install date using
rpm -qa --last
As a more elaborate approach, you can use the --queryformat option with the :date option to format the date exactly as you want (it uses strftime).
In either case, you can make a script to extract the package names from the output of rpm, and use those packages with yum (or even rpm) to manipulate as needed.
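For example, a minimal sketch of such a script, assuming the offending packages were installed on May 11 (the exact output of %{INSTALLTIME:date} is locale-dependent, so adjust the grep pattern to what you actually see):
rpm -qa --queryformat '%{INSTALLTIME:date}\t%{NAME}\n' \
  | grep 'May 11' \
  | cut -f2 \
  | xargs yum remove -y    # or feed the names to 'yum downgrade' instead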
When doing a downgrade, there is one odd thing to keep in mind: it revises the install date of the packages to the current date, rather than being a complete undo that restores the previous date.
All packages installed in a single transaction have an identical RPMTAG_INSTALLTID tag value.
Use
rpm -qa --qf '[%{name}\t%{installtid:date}\n]'
to find all packages that were installed as part of the yum group install.
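For instance, a small sketch that lists everything from the same transaction as one known package (GConf2-devel is taken from the yum history info output above; any package from that transaction would do):
TID=$(rpm -q --qf '%{INSTALLTID}' GConf2-devel)
rpm -qa --qf '%{INSTALLTID} %{NAME}\n' | awk -v t="$TID" '$1 == t { print $2 }'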
Yum has a provision for you to undo a transaction, i.e. yum history undo <transaction-id>.
In your case, to remove all the packages you installed today you can run:
yum history undo 69
yum history undo 68
yum history undo 67