JupyterLab has a long launch delay. Caused by an extension? - jupyter

My JupyterLab has a long launch delay: it takes ~20 seconds from reading the config JSON files
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.492 ServerApp] Paths used for configuration of jupyter_server_config:
to extension linkage
Jan 25 22:38:11 xinliupitt-prebuilt-20 bash[1625]: [I 2023-01-25 22:38:11.816 ServerApp] jupyterlab | extension was successfully linked.
Detailed logs are here:
Jan 25 22:37:41 xinliupitt-prebuilt-20 systemd[1]: Started Jupyter Notebook Service.
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.492 ServerApp] Paths used for configuration of jupyter_server_config:
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /etc/jupyter/jupyter_server_config.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.493 ServerApp] Paths used for configuration of jupyter_server_config:
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /usr/local/etc/jupyter/jupyter_server_config.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.493 ServerApp] Paths used for configuration of jupyter_server_config:
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/beatrix_jupyterlab.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/jupyter-server-proxy-jupyterserverexten>
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/jupyter_server_mathjax.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/jupyterlab.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/jupyterlab_git.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/jupytext.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/nbclassic.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/nbdime.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.d/notebook_shim.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /opt/conda/etc/jupyter/jupyter_server_config.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.619 ServerApp] Paths used for configuration of jupyter_server_config:
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /home/jupyter/.local/etc/jupyter/jupyter_server_config.json
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: [D 2023-01-25 22:37:53.619 ServerApp] Paths used for configuration of jupyter_server_config:
Jan 25 22:37:53 xinliupitt-prebuilt-20 bash[1625]: /home/jupyter/.jupyter/jupyter_server_config.json
Jan 25 22:38:07 xinliupitt-prebuilt-20 bash[1625]: Matplotlib is building the font cache; this may take a moment.
Jan 25 22:38:11 xinliupitt-prebuilt-20 bash[1625]: [I 2023-01-25 22:38:11.816 ServerApp] jupyterlab | extension was successfully linked.
Jan 25 22:38:11 xinliupitt-prebuilt-20 bash[1625]: [I 2023-01-25 22:38:11.816 ServerApp] jupyterlab_git | extension was successfully linked.
...
I tried disabling extensions one by one to see which one was the root cause of the delay. For example, I disabled the extensions "jupyter_server_mathjax", "jupyterlab", "jupyterlab_git", etc.
I was able to disable every extension successfully (and found that none of them was the root cause) except the "jupyterlab" extension. To disable the "jupyterlab" extension, I tried these approaches:
modify the content of jupyterlab.json, changing true to false
delete jupyterlab.json
before the Jupyter launch stage, run the command jupyter labextension disable jupyterlab
try any of the Jupyter launch commands below:
jupyter lab --dev-mode # start JupyterLab in development mode, with no extensions
jupyter lab --core-mode # start JupyterLab in core mode, with no extensions
jupyter lab --app-dir=~/myjupyterlabapp # start JupyterLab with a particular set of extensions
However, none of them successfully disabled the "jupyterlab" extension, since this log line always appears in the end:
jupyterlab | extension was successfully linked.
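For what it's worth, the "extension was successfully linked" lines come from server extensions, which are configured separately from the frontend extensions that jupyter labextension disable manages. A minimal sketch of inspecting and overriding them, assuming a recent jupyter_server and using jupyterlab_git purely as an example name:
# list the server extensions that get linked at startup
jupyter server extension list
# disable one of them via the CLI
jupyter server extension disable jupyterlab_git
# or write an explicit override (assuming this file does not exist yet; otherwise merge the keys by hand)
cat > ~/.jupyter/jupyter_server_config.json <<'EOF'
{
  "ServerApp": {
    "jpserver_extensions": {
      "jupyterlab_git": false
    }
  }
}
EOF
As far as I can tell, the jupyterlab entry itself is special: running jupyter lab starts the server through that extension, so it is expected to always show up as "successfully linked", whatever the config says.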
I also tried whether "buildCheck": false can disable the extensions, setting it in each of these files:
page_config.json
jupyter_notebook_config.json
jupyter_config.json
No, it didn't work; the log still shows "extension was successfully linked."
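As far as I can tell, buildCheck only controls JupyterLab's "build recommended" check at launch, not whether server extensions are loaded, which would explain why it had no effect here. For reference, a sketch of where that setting normally lives (path assumes the default conda app dir from the logs above):
# page_config.json sits in the lab application's settings directory
cat /opt/conda/share/jupyter/lab/settings/page_config.json
{
  "buildCheck": false,
  "buildAvailable": false
}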
My questions:
Has anyone experienced a long JupyterLab launch delay, and how did you resolve it?
Is there any other approach for me to disable the "jupyterlab" extension?
Thanks a lot!

Related

VS code setup for Java remote development

A few weeks back, VS Code made JDK 11 mandatory for Java development:
The Eclipse Platform has decided to require Java 11 as the minimum requirement for its September 2020 release. See https://www.eclipse.org/lists/eclipse-pmc/msg03821.html.
Because vscode-java depends on the Eclipse JDT.LS, the same requirement applies to vscode-java but on a more aggressive timeline: vscode-java usually consumes JDT.LS builds that depend on bleeding edge JDT features, effectively shipping pre-release versions of Eclipse Platform/JDT. As of July 22nd, 2020, Java 11 is now required for running vscode-java.
source: vscode
I am using the Remote-SSH extension to connect to my remote VM (Ubuntu on Vagrant).
When I open a Java file over Remote-SSH, I get an error saying to install Java 11.
I already have Java 11 in my Vagrant VM.
I can see the following in my VM:
$ update-java-alternatives -l
java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
vagrant@vagrant:/usr/lib/jvm
$ ls -alh
total 24K
drwxr-xr-x 4 root root 4.0K Feb 21 2020 .
drwxr-xr-x 138 root root 4.0K Aug 13 17:57 ..
lrwxrwxrwx 1 root root 25 Feb 20 2019 default-java -> java-1.11.0-openjdk-amd64
lrwxrwxrwx 1 root root 21 Jan 15 2020 java-1.11.0-openjdk-amd64 -> java-11-openjdk-amd64
-rw-r--r-- 1 root root 2.0K Jan 15 2020 .java-1.11.0-openjdk-amd64.jinfo
drwxr-xr-x 7 root root 4.0K Feb 21 2020 java-11-openjdk-amd64
lrwxrwxrwx 1 root root 20 Jan 17 2020 java-1.8.0-openjdk-amd64 -> java-8-openjdk-amd64
-rw-r--r-- 1 root root 2.7K Jan 17 2020 .java-1.8.0-openjdk-amd64.jinfo
drwxr-xr-x 7 root root 4.0K Feb 21 2020 java-8-openjdk-amd64
All my Java projects depend on Java 8, so just to work in VS Code I need Java 11.
Please help me set up this environment.
The easiest fix is to change your Java configuration with the following command:
sudo update-alternatives --config java
and choose the Java version you want to be active.
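Since the VM above is Ubuntu, the same thing can also be done non-interactively with the tool already shown in the question; a sketch using the names from the update-java-alternatives -l output above:
# switch every java alternative to the Java 11 install in one go
sudo update-java-alternatives -s java-1.11.0-openjdk-amd64
# confirm the active version
java -version
# optionally point JAVA_HOME at it as well (path from the jvm listing above)
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64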

Kafka pod fails to come up after pod deletion with NFS

We were trying to run a Kafka cluster on Kubernetes using the NFS provisioner. The cluster came up fine. However, when we killed one of the Kafka pods, the replacement pod failed to come up.
Persistent volume before pod deletion:
# mount
10.102.32.184:/export/pvc-ce1461b3-1b38-11e8-a88e-005056073f99 on /opt/kafka/data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.133.40.245,local_lock=none,addr=10.102.32.184)
# ls -al /opt/kafka/data/logs
total 4
drwxr-sr-x 2 99 99 152 Feb 26 21:07 .
drwxrwsrwx 3 99 99 18 Feb 26 21:07 ..
-rw-r--r-- 1 99 99 0 Feb 26 21:07 .lock
-rw-r--r-- 1 99 99 0 Feb 26 21:07 cleaner-offset-checkpoint
-rw-r--r-- 1 99 99 57 Feb 26 21:07 meta.properties
-rw-r--r-- 1 99 99 0 Feb 26 21:07 recovery-point-offset-checkpoint
-rw-r--r-- 1 99 99 0 Feb 26 21:07 replication-offset-checkpoint
# cat /opt/kafka/data/logs/meta.properties
#
#Mon Feb 26 21:07:08 UTC 2018
version=0
broker.id=1003
Deleting the pod:
kubectl delete pod kafka-iced-unicorn-1
The reattached persistent volume in the newly created pod:
# ls -al /opt/kafka/data/logs
total 4
drwxr-sr-x 2 99 99 180 Feb 26 21:10 .
drwxrwsrwx 3 99 99 18 Feb 26 21:07 ..
-rw-r--r-- 1 99 99 0 Feb 26 21:10 .kafka_cleanshutdown
-rw-r--r-- 1 99 99 0 Feb 26 21:07 .lock
-rw-r--r-- 1 99 99 0 Feb 26 21:07 cleaner-offset-checkpoint
-rw-r--r-- 1 99 99 57 Feb 26 21:07 meta.properties
-rw-r--r-- 1 99 99 0 Feb 26 21:07 recovery-point-offset-checkpoint
-rw-r--r-- 1 99 99 0 Feb 26 21:07 replication-offset-checkpoint
# cat /opt/kafka/data/logs/meta.properties
#
#Mon Feb 26 21:07:08 UTC 2018
version=0
broker.id=1003
We see the following error in the Kafka logs:
[2018-02-26 21:26:40,606] INFO [ThrottledRequestReaper-Produce], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-02-26 21:26:40,711] FATAL [Kafka Server 1002], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.io.IOException: Invalid argument
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at kafka.utils.FileLock.<init>(FileLock.scala:28)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:104)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:103)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:103)
at kafka.log.LogManager.<init>(LogManager.scala:65)
at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:648)
at kafka.server.KafkaServer.startup(KafkaServer.scala:208)
at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:102)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:49)
[2018-02-26 21:26:40,713] INFO [Kafka Server 1002], shutting down (kafka.server.KafkaServer)
[2018-02-26 21:26:40,715] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
The only way around this seems to be to delete the persistent volume claim and force delete the pod again, or alternatively to use a storage provider other than NFS (Rook works fine in this scenario).
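For reference, a sketch of that workaround with kubectl (the PVC name is hypothetical; use whatever kubectl get pvc shows for the affected broker):
# remove the claim backing the broken broker's data dir
kubectl delete pvc datadir-kafka-iced-unicorn-1
# force delete the pod so it is recreated with a fresh volume
kubectl delete pod kafka-iced-unicorn-1 --grace-period=0 --force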
Has anyone come across this issue with NFS provisioner?

varnish 4.1 default.vcl permissions denied

When I try to add the Magento 2 varnish.vcl file by creating a symbolic link, the Varnish service stops working with a permission denied error, while if I use the default Varnish configuration file, Varnish works smoothly.
My stack is Ubuntu 16.04, Varnish 4.1.
ls -al
drwxr-xr-x 2 root root 4096 Mar 21 13:14 .
drwxr-xr-x 96 root root 4096 Mar 21 12:56 ..
lrwxrwxrwx 1 root root 44 Mar 21 13:14 default.vcl -> /var/www/bazaar/varnish.vcl
-rw-r--r-- 1 root root 1225 Aug 22 2017 default.vcl_bak
-rw-r--r-- 1 root root 37 Mar 21 12:56 secret
Here is the status of the Varnish service:
● varnish.service - Varnish HTTP accelerator
Loaded: loaded (/lib/systemd/system/varnish.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/varnish.service.d
└─customexec.conf
Active: failed (Result: exit-code) since Wed 2018-03-21 13:59:08 UTC; 2s ago
Docs: https://www.varnish-cache.org/docs/4.1/
man:varnishd
Process: 3093 ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m (code=exited, status=2)
Main PID: 3093 (code=exited, status=2)
Mar 21 13:59:08 bazaar systemd[1]: Stopped Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar systemd[1]: Started Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar varnishd[3093]: Error: Cannot read -f file (/etc/varnish/default.vcl): Permission denied
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Unit entered failed state.
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Failed with result 'exit-code'.
My current user for Nginx is bazaar,
and the permissions for varnish.vcl are as follows:
-rw-r--r-- 1 bazaar bazaar 7226 Mar 21 13:24 varnish.vcl
Any hint or help will be highly appreciated.
Thanks.
It is likely that the user (vcache) does not have access to read in the parent directories (/var/www/bazaar).
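A quick way to confirm that, as a diagnostic sketch (user and paths taken from the output above):
# does vcache actually reach the file through the symlink?
sudo -u vcache cat /etc/varnish/default.vcl
# show the permissions of every path component on the way to the target
namei -l /var/www/bazaar/varnish.vcl
# if a parent directory is missing the execute (traverse) bit for others, adding it is often enough
sudo chmod o+x /var/www /var/www/bazaar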

Kubernetes API server connection refused

I am trying to set up a Kubernetes cluster using the instructions at https://coreos.com/kubernetes/docs/latest/getting-started.html.
I am at step 2 (Deploy master): when I start the master service, it is in active status, but it cannot communicate with the API server. Also, there are 6 containers started, but their logs are empty. Please find the kubelet log below:
Jan 26 07:54:18 kubernetes-1.novalocal systemd[1]: Started kubelet.service.
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214551 1115 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214631 1115 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.217269 1115 plugins.go:71] No cloud provider specified.
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.219217 1115 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service"
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.672952 1115 fs.go:108] Filesystem partitions: map[/dev/vda9:{mountpoint:/ major:254 minor:9 fsType: blockSize:0} /dev/vda3:{mountpoint:/usr major:254 minor:3 fsType: blockSize:0} /dev/vda6:{mountpoi
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.856238 1115 manager.go:163] Machine: {NumCores:2 CpuFrequency:1999999 MemoryCapacity:4149022720 MachineID:5a493caa9327449cabd050ac6cd2e065 SystemUUID:5A493CAA-9327-449C-ABD0-50AC6CD2E065 BootID:541d
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.858067 1115 manager.go:169] Version: {KernelVersion:4.3.3-coreos-r2 ContainerOsVersion:CoreOS 899.5.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862564 1115 server.go:798] Adding manifest file: /etc/kubernetes/manifests
Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862655 1115 server.go:808] Watching apiserver
Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:21.165506 1115 plugins.go:56] Registering credential provider: .dockercfg
Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.171563 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r
Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.172329 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r
Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.173114 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection refused
Also, the following containers were launched:
2bf275350996 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_controller-manager-elector.5b0f7cea_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_1f370f8c
c64042286744 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_scheduler-elector.bc3d71be_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_c9ecb387
81bd74d0396a gcr.io/google_containers/hyperkube:v1.1.2 "/hyperkube proxy --m" 26 minutes ago Up 26 minutes k8s_kube-proxy.176f5569_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_840d8228
39494ed8e814 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_36b73b1d
632dc0a2f612 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-apiserver-192.168.111.32_kube-system_86819bf93f678db0ee778b8c8bb658dc_815c6627
361b297b37f9 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_7a6182ed
These are trying to talk to the insecure version of the API, which shouldn't work between machines. That will only work on the master. Additionally, the master isn't set up to accept work (register_node=false), so it is not expected to report back its status.
The key piece of info we're missing: what machine did that log come from?
Did you set the MASTER_HOST= parameter correctly?
The address of the master node. In most cases this will be the publicly routable IP of the node. Worker nodes must be able to reach the master node(s) via this address on port 443.
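A quick connectivity check from each machine can narrow this down; a sketch, with <MASTER_HOST> as a placeholder for your actual value:
# on the master itself, the insecure local port should answer
curl http://127.0.0.1:8080/version
# from a worker, only the secure endpoint on port 443 is expected to be reachable
curl -k https://<MASTER_HOST>/version
# depending on your auth setup the second call may return an authorization error,
# but "connection refused" there points at MASTER_HOST or the API server not listening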
Also, note this section of the docs:
Note that the kubelet running on a master node may log repeated attempts to post its status to the API server. These warnings are expected behavior and can be ignored. Future Kubernetes releases plan to handle this common deployment consideration more gracefully.

MAMP PRO 3x use a symbolic link for htdocs

I'm trying to set up MAMP PRO 3 and I have a symbolic link from my Documents folder to /Applications/MAMP/htdocs:
lrwxr-xr-x 1 msteudel admin 33 Jun 24 15:11 htdocs -> /Users/msteudel/Documents/wwwroot
In the options for my host I have FollowSymLinks checked (I have tried all sorts of combinations of options, from all of them to just symlinks):
Screenshot of settings
I'm still getting this in my Apache error log:
[Tue Jun 24 15:15:00 2014] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /Applications/MAMP/htdocs
And in the browser I'm getting:
403 Forbidden
You don't have permission to access / on this server.
This all used to work when I was just using the free MAMP.
I tried changing the permissions of the symlink, but that didn't work. The group is admin, whereas MAMP might be expecting staff? I'm not sure that's the problem... I'm on a Mac, in case someone missed that.
I also checked that all the folders were set to at least 755...
drwxr-xr-x 5 root admin 170 Oct 23 2013 Users
drwxr-xr-x+ 72 msteudel staff 2448 Jun 24 14:43 msteudel
drwxr-xr-x+ 87 msteudel staff 2958 Jun 23 14:22 Documents
drwxrwxrwx 69 msteudel staff 2346 Jun 23 11:18 wwwroot
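In case it helps with debugging: the trailing + on msteudel and Documents above means those directories carry ACLs, which can also block traversal. A diagnostic sketch (the _www user is the stock macOS Apache user; MAMP PRO may run Apache as a different user, so adjust accordingly):
# list the ACLs on the directories leading to the symlink target
ls -led /Users/msteudel /Users/msteudel/Documents /Users/msteudel/Documents/wwwroot
# check whether the Apache user can actually traverse into the target
sudo -u _www ls /Users/msteudel/Documents/wwwroot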