Bluemix - How to tell the current stack of my app? - ibm-cloud

In September I received an email saying that the old stack (lucid64) would be removed by the end of October.
The email also said that on Oct 17 (three days ago) Bluemix will initiate an automatic migration to cflinuxfs2.
How do I know if my app was already migrated?
Do I have to run cf push with the -s cflinuxfs2 flag from now on? If yes, until when?
I have executed cf push several times this month but without the -s flag.
Thanks

cf app APP_NAME should show you the stack, e.g.:
$ cf app APP_NAME
Showing health and status for app APP_NAME in org ORG_NAME / space SPACE_NAME as ME@MY_DOMAIN.com...
OK
requested state: started
instances: 2/2
usage: 32M x 2 instances
urls: APP_NAME.APP_DOMAIN.com
last uploaded: Mon Oct 19 02:21:39 UTC 2015
stack: cflinuxfs2
buildpack: go_buildpack
state since cpu memory disk details
#0 running 2015-10-18 07:22:08 PM 2.1% 10.7M of 32M 38.9M of 64M
#1 running 2015-10-18 07:22:07 PM 2.1% 8.4M of 32M 38.9M of 64M
You can see it says stack: cflinuxfs2
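You can also list the stacks known to the platform, assuming a reasonably recent cf CLI:
$ cf stacks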

Running the following command will show you your application stack.
[07:40:10 ~]$ cf app velocityapp
...
stack: cflinuxfs2
You can re-push your application with the -s flag or the platform will automatically migrate you at a later date.
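For example, to target the new stack explicitly on your next deploy, a push along these lines should do it (APP_NAME stands in for your application name):
$ cf push APP_NAME -s cflinuxfs2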

Armitage 'Connection refused' error in new install of Kali Linux after full upgrade

I installed Kali Linux via VMware and did a full system upgrade:
apt-get update
apt-get upgrade
apt-get full-upgrade
As part of the upgrade, PostgreSQL was upgraded from v11 to v12. I followed the instructions to finish this part of the upgrade:
pg_dropcluster 12 main --stop
pg_upgradecluster 11 main
pg_dropcluster 11 main
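For reference, pg_lsclusters should afterwards list only the 12/main cluster; its output also shows the log file path I check below:
pg_lsclusters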
I start postgresql, initialize metasploit, and start Armitage:
/etc/init.d/postgresql start
msfdb init
armitage
The only console output appears unrelated:
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true
I do get the popup box with the connection information. I found that I get the "Unexpected end of file from server" error if I use 'localhost' as the host, so, per their instructions, I changed it to the external IP (in this case 192.168.9.134). I checked metasploit-framework/config/database.yml for the port and login credentials.
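(For what it's worth, that config can be dumped directly; the full path here is taken from the detection message further below.)
cat /usr/share/metasploit-framework/config/database.yml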
After clicking 'Connect' with this information I get a connection window stating:
Connecting to 192.168.9.134:5432 Connection refused (Connection refused)
There's also a progress bar that eventually fills up completely (unless I click 'Cancel'), after which nothing happens. Since I run the command from the terminal, I can see that the process is still running (I don't get my prompt back), but the window disappears and Armitage doesn't actually start. The log file, as verified by pg_lsclusters (/var/log/postgresql/postgresql-12-main.log), is actually empty.
The link I mentioned before suggests that the problem could be either not enough RAM (I set the VM to have 4 GB, and free -m shows):
total used free shared buff/cache available
Mem: 3964 803 2677 29 483 2787
Swap: 4093 0 4093
Or that the Metasploit RPC daemon never started (that window does come up the first time, but not subsequent times). I verified that it's running via msfdb status:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2020-02-07 16:06:52 EST; 19min ago
Process: 1753 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 1753 (code=exited, status=0/SUCCESS)
Feb 07 16:06:52 kali systemd[1]: Starting PostgreSQL RDBMS...
Feb 07 16:06:52 kali systemd[1]: Started PostgreSQL RDBMS.
COMMAND  PID  USER     FD  TYPE DEVICE SIZE/OFF NODE NAME
postgres 1735 postgres 3u  IPv6 32516  0t0      TCP  localhost:5432 (LISTEN)
postgres 1735 postgres 4u  IPv4 32517  0t0      TCP  localhost:5432 (LISTEN)
UID      PID  PPID C STIME TTY STAT TIME CMD
postgres 1735 1    0 16:06 ?   Ss   0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c config_file=/etc/postgresql/12/main/postgresql.conf
[+] Detected configuration file (/usr/share/metasploit-framework/config/database.yml)
Also, running regular Metasploit (msfconsole) appears to work fine and loads without error (not sure if there's any output that would be helpful here). I don't use PostgreSQL directly, so I haven't changed any configuration, nor do I have any other applications (that I'm aware of) that use it, so it should be a pretty clean setup (not to mention this is a fresh install of Kali Linux). I'm out of ideas for what to check next. An online search didn't seem to match this problem well. Any thoughts?
Armitage has been deprecated for some time now, as it has not been updated since 2015, and is (to some extent) incompatible with current versions of Metasploit.
Although this may not fix your problem, I suggest not using software that is this far out of date.

Kubernetes service cannot send requests to itself

I have a service that, in some contexts, sends requests to itself.
I can reach the service from outside the cluster, but the self-requests fail (time-out).
Environment:
minikube v0.34.1
Linux version 4.15.0 (jenkins@jenkins) (gcc version 7.3.0 (Buildroot 2018.05)) #1 SMP Fri Feb 15 19:27:06 UTC 2019
I've been using https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip as a troubleshooting guide, but I'm down to the step that says "seek help".
Troubleshooting results:
journalctl -u kubelet | grep -i hairpin
Feb 26 19:57:10 minikube kubelet[3066]: W0226 19:57:10.124151 3066 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Feb 26 19:57:10 minikube kubelet[3066]: I0226 19:57:10.124295 3066 docker_service.go:236] Hairpin mode set to "hairpin-veth"
The troubleshooting guide indicates that "hairpin-veth" is OK.
for intf in /sys/devices/virtual/net/docker0/brif/veth*; do cat $intf/hairpin_mode; done
0
...
0
Note that the guide used /sys/devices/virtual/net/cbr0/brif/*, but in this version of minikube, the path is /sys/devices/virtual/net/docker0/brif/veth*. I'd like to understand why the paths are different, but it appears that hairpin_mode is not enabled.
The next step in the guide is: Seek help if none of above works out.
Am I correct in believing that I need to enable hairpin_mode?
If so, how do I do so?
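For reference, the direct inverse of the check above would be writing 1 into the same sysfs files, something like the sketch below; I have not verified whether this persists or is the right fix for minikube:
for intf in /sys/devices/virtual/net/docker0/brif/veth*; do echo 1 | sudo tee $intf/hairpin_mode; done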
It seems like a known issue; more information here:
As a workaround, you can try:
minikube ssh -- sudo ip link set docker0 promisc on
Please share the results.
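Whether promiscuous mode actually took effect can then be verified by looking for the PROMISC flag in the interface output, e.g.:
minikube ssh -- ip link show docker0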

Where are the Kubernetes kubelet logs located?

I installed Kubernetes on my Ubuntu machine. For some debugging purposes, I need to look at the kubelet log file (if there is any such file).
I have looked in /var/logs, but I couldn't find such a file. Where could it be?
If you run kubelet using systemd, then you could use the following method to see kubelet's logs:
# journalctl -u kubelet
If you are trying to go directly to the file, you can find the kubelet logs in /var/log/syslog. This is for Ubuntu 16.04 and above.
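Since syslog aggregates many services, it is usually easier to filter for kubelet lines, for example:
grep kubelet /var/log/syslog | tail -n 100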
It depends on how it was installed. I installed Kubernetes on some Ubuntu machines following the Docker-MultiNode instructions.
With this install, I find the logs using the docker logs command, like this:
First, find your container ID:
$ docker ps | egrep kubelet
Use that container ID to view the logs:
$ docker logs <container-id>
Finally, I found them in the /var/log/upstart directory. Kubernetes on my machine is started using upstart; that's why the log files are in the upstart directory.
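In that case, a quick way to locate them is something like the following (the exact file name under /var/log/upstart is an assumption and may differ per setup):
ls /var/log/upstart/ | grep -i kube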
I installed Kubernetes with kind (Kubernetes in Docker).
Find the kind Docker container and enter it:
$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62588e4d284b kindest/node:v1.17.0 "/usr/local/bin/entr…" 2 weeks ago Up 2 weeks 127.0.0.1:32769->6443/tcp kind2-control-plane
$ docker container exec -it kind2-control-plane bash
root@kind2-control-plane:/#
Inside the kind2-control-plane container, you can find log files in two places:
/var/log/containers/
/var/log/pods/
You will then find that they are the same, as you can see in the example below:
root@kind2-control-plane:/# cat /var/log/containers/redis-master-7db7f6579f-scw95_default_master-f6374281c2c6afcfcd0ee1214d9bd51c1684c0b6c0ba1056295246ecd055563c.log | tail -n 5
2020-04-08T12:09:29.824252114Z stdout F
2020-04-08T12:09:29.824372278Z stdout F [1] 08 Apr 12:09:29.822 # Server started, Redis version 2.8.19
2020-04-08T12:09:29.824440661Z stdout F [1] 08 Apr 12:09:29.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2020-04-08T12:09:29.824459317Z stdout F [1] 08 Apr 12:09:29.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2020-04-08T12:09:29.82446451Z stdout F [1] 08 Apr 12:09:29.824 * The server is now ready to accept connections on port 6379
root@kind2-control-plane:/# cat /var/log/pods/default_redis-master-7db7f6579f-scw95_094824e1-25aa-4e1e-ab23-d4bae861988a/master/0.log | tail -n 5
2020-04-08T12:09:29.824252114Z stdout F
2020-04-08T12:09:29.824372278Z stdout F [1] 08 Apr 12:09:29.822 # Server started, Redis version 2.8.19
2020-04-08T12:09:29.824440661Z stdout F [1] 08 Apr 12:09:29.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2020-04-08T12:09:29.824459317Z stdout F [1] 08 Apr 12:09:29.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2020-04-08T12:09:29.82446451Z stdout F [1] 08 Apr 12:09:29.824 * The server is now ready to accept connections on port 6379
root@kind2-control-plane:/# ls -l /var/log/containers/ | grep redis
lrwxrwxrwx 1 root root 101 Apr 8 12:09 redis-master-7db7f6579f-scw95_default_master-f6374281c2c6afcfcd0ee1214d9bd51c1684c0b6c0ba1056295246ecd055563c.log -> /var/log/pods/default_redis-master-7db7f6579f-scw95_094824e1-25aa-4e1e-ab23-d4bae861988a/master/0.log
If you want to know more detail about the directories, see the 2019-2-merge-request on GitHub.
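The kind node image also runs kubelet itself under systemd, so the kubelet's own logs (as opposed to the container and pod logs above) should be reachable from inside the node with the usual journalctl call, assuming journald is in its default configuration:
root@kind2-control-plane:/# journalctl -u kubelet | tail -n 20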

systemd restart service on watchdog does not terminate previous hung instance

I'm trying to set up a systemd service configuration that restarts the service on watchdog failure. If my application does not call sd_notify() in time, systemd spawns a new instance.
However, the previous instance is not killed. After some time, I have many instances of my application running.
$ systemctl status my-daemon.service
Loaded: loaded (/lib/systemd/system/my-daemon.service; disabled)
Active: active (running) since Tue, 26 Aug 2014 10:27:46 +0000; 7s ago
Main PID: 1433 (attendance-syst)
CGroup: name=systemd:/system/my-daemon.service
├ 1281 /usr/local/bin/my-daemon
├ 1384 /usr/local/bin/my-daemon
├ 1407 /usr/local/bin/my-daemon
└ 1433 /usr/local/bin/my-daemon
...
This is part of my service file:
[Service]
ExecStart=/usr/local/bin/my-daemon
TimeoutStopSec=5
WatchdogSec=10
Restart=on-failure
How can I configure systemd to kill instances which fail the watchdog?
I have already read the manual page, but it didn't help me.
I thought Restart=on-failure would restart a hung process by default...
It's a bug, and it's already fixed in newer versions of systemd.
In systemd 208 (available for Debian jessie) it works correctly.
In systemd 204 (available for Debian wheezy via backports) it's still broken.
I haven't found the exact release where they fixed it.
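If you are unsure which systemd release a machine is running, it can be checked with:
systemctl --version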

rndc: connect failed: 127.0.0.1#953: connection refused

This is a very annoying problem that I am having with rndc reload.
I am getting the following error:
rndc: connect failed: 127.0.0.1#953: connection refused
However, the following work fine:
[root@cbgfx ~]# service named restart
Stopping named: . [ OK ]
Starting named: [ OK ]
[root@cbgfx ~]# tail -f /var/log/messages
Aug 7 12:51:09 cbgfx named[31990]: zone 120.88.167.in-addr.arpa/IN: loaded serial 14
Aug 7 12:51:09 cbgfx named[31990]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
Aug 7 12:51:09 cbgfx named[31990]: zone domain.com/IN: domain.com/MX 'mail.servergreek.com' has no address records (A or AAAA)
Aug 7 12:51:09 cbgfx named[31990]: zone domain.com/IN: loaded serial 14
Aug 7 12:51:09 cbgfx named[31990]: zone localhost.localdomain/IN: loaded serial 0
Aug 7 12:51:09 cbgfx named[31990]: zone localhost/IN: loaded serial 0
Aug 7 12:51:09 cbgfx named[31990]: managed-keys-zone ./IN: loaded serial 4
Aug 7 12:51:09 cbgfx named[31990]: zone domain.com/IN: sending notifies (serial 14)
Aug 7 12:51:09 cbgfx named[31990]: zone 120.88.167.in-addr.arpa/IN: sending notifies (serial 14)
Aug 7 12:51:09 cbgfx named[31990]: running
The VPS has an IPv6 address; is there anything I missed here?
Thanks in advance, guys
I fixed it myself; it was a permission and ownership issue. To fix it, you need to execute these commands over SSH:
Fix rndc connection refused error
chown root:named /etc/rndc.key
chmod 640 /etc/rndc.key
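After fixing ownership and permissions, restarting named and re-checking the control channel should confirm the fix, for example:
service named restart
rndc status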
Clear the files in the /var/cache/bind/ directory, and afterwards restart bind9 from the terminal:
/etc/init.d/bind9 restart
The problem might not only be in rndc.key.
The easiest way to detect it is to run:
service named restart
Check if there is any error; if there is, run:
systemctl status named.service
Check for any permission-denied errors. They could be in the log files as well.
In my case, as bsentosa's comment suggests, I needed to start the named process. You can enable named to start together with the system:
$ systemctl enable named
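Note that enable on its own only takes effect at the next boot; on a reasonably recent systemd you can start it immediately as well with:
$ systemctl enable --now named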
I am on Mac OS X (Ventura), with BIND 9 installed through Homebrew. I ran into the same issue. I had to run named with sudo to make this error disappear: it was an ownership issue.
Also, you should pay attention to the named logs; sometimes you simply have errors in your *.zone file.
I hope it will help Mac users landing here.