JupyterHub is installed on a server that I administer. I would like to restrict all users to using JupyterHub (their notebooks) for no more than 12 hours per day.
What configuration is needed to achieve this? I would appreciate your assistance in this matter.
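As far as I know, JupyterHub has no built-in per-user daily time quota, so a strict "12 hours per day" rule would need custom accounting. A common approximation is to cap how long each user's server may run by using the jupyterhub-idle-culler service with its --max-age option. A minimal sketch of jupyterhub_config.py, assuming that package is installed and that limiting each server run to 12 hours is acceptable:

```python
# jupyterhub_config.py -- sketch, assuming jupyterhub-idle-culler is installed
# (pip install jupyterhub-idle-culler).
import sys

c = get_config()  # provided by JupyterHub when it loads this file

# Run the idle culler as a managed JupyterHub service and cap each
# single-user server at 12 hours (43200 s) of runtime.
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": [
            sys.executable,
            "-m", "jupyterhub_idle_culler",
            "--timeout=3600",    # also stop servers idle for 1 hour
            "--max-age=43200",   # hard cap: 12 hours per server run
        ],
    }
]

# On JupyterHub >= 2.0 the service needs explicit permissions.
c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",
        "scopes": [
            "list:users",
            "read:users:activity",
            "read:servers",
            "delete:servers",
        ],
        "services": ["idle-culler"],
    }
]
```

With --max-age=43200 the culler stops any single-user server that has been running for 12 hours, even if it is still active; users can start a new server afterwards, which is why a true daily quota would still need extra logic (for example, a custom service that tracks cumulative runtime per user).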
I have a cluster with 2 servers in HA. Is there some configuration so that when I make a change, for example to a user's password or role, the change is applied immediately on both servers?
The problem is that when a user's password is changed, it does not update on the other server immediately. The same happens when a user is assigned a role mapping: it never updates on both servers until the server is rebooted.
OS: Linux (Ubuntu 16.04)
Keycloak version: 11.0
Thanks for the help
I can't tell from the first paragraph of your question whether Keycloak has ever propagated the changes to the other servers or not.
Does the setup you use usually propagate changes?
If not, it sounds like there are issues with your cluster setup.
Do the nodes discover each other? You can check the logs on startup; there is a good illustration on the Keycloak blog of how to check this.
In general it would be a good idea to look over the recommended clustering setup in the docs.
You could change the number of owners in the cluster so that both nodes own the data before it is put in the database (see the sketch below). This might help if the issue is that the changes are not propagated quickly enough.
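As a sketch only: on a WildFly-based Keycloak distribution such as 11.0, the owners setting lives in the Infinispan subsystem of standalone-ha.xml. Assuming the default cache names (other attributes trimmed here), raising owners to 2 would look roughly like:

```xml
<!-- standalone/configuration/standalone-ha.xml, Infinispan subsystem (trimmed sketch) -->
<cache-container name="keycloak">
    <!-- owners="2": every cache entry is kept on both nodes -->
    <distributed-cache name="sessions" owners="2"/>
    <distributed-cache name="authenticationSessions" owners="2"/>
    <distributed-cache name="offlineSessions" owners="2"/>
    <distributed-cache name="clientSessions" owners="2"/>
    <distributed-cache name="offlineClientSessions" owners="2"/>
    <distributed-cache name="loginFailures" owners="2"/>
    <distributed-cache name="actionTokens" owners="2"/>
</cache-container>
```

Note that user and role data themselves live in the shared database; the caches above cover sessions and related state. The stale-until-restart symptom usually means the cross-node cache invalidation (over the JGroups cluster) is not happening, so confirming that both nodes actually join the same cluster is the first thing to check.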
I installed IBM Blockchain Platform v2 beta, then tried to configure it and add nodes.
My question is:
Has anyone faced a long delay in node creation, for example for a CA node?
I am facing this problem and cannot find where to check the logs.
Notification Error Image:
Note:
The node still has not been created after two days.
Here is the link to the official IBP documentation explaining how to retrieve and view logs.
IBM Blockchain Platform - Viewing your node logs
I also suggest you check whether there is any issue in the Kubernetes cluster where the IBP is running, for example by inspecting the pods and their logs, as sketched below.
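For instance, if you have kubeconfig access to that cluster, something like the following (using the official Kubernetes Python client; the namespace name is a placeholder, check yours with kubectl get ns) lists the pods and dumps recent logs for anything that is not running:

```python
# Sketch: list pods in the namespace where the IBP components run and
# print recent logs for any pod that is not in the Running phase.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()          # uses your local kubeconfig / cluster credentials
v1 = client.CoreV1Api()

namespace = "ibp-namespace"        # placeholder -- adjust to your deployment

for pod in v1.list_namespaced_pod(namespace).items:
    phase = pod.status.phase
    print(f"{pod.metadata.name}: {phase}")
    if phase != "Running":
        try:
            print(v1.read_namespaced_pod_log(pod.metadata.name, namespace, tail_lines=50))
        except ApiException as exc:
            print(f"  could not read logs: {exc.reason}")
```

A CA pod stuck in Pending or CrashLoopBackOff (for example because of missing storage or resource limits) would explain a node that never finishes creating.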
As per the IBM Cloud documentation,
If you use Enterprise Plan networks, you can view component logs in a text file format. If you use Starter Plan networks, component logs are gathered by the IBM Cloud Log Analysis service and you can view the logs in Kibana.
Each component generates logs from different activities. This is because each component plays different roles within the Hyperledger Fabric network architecture and transaction flows.
Certificate Authority logs: The Certificate Authority manages the identity of participants within the network. In Certificate Authority logs, you can find logs from when participants generate public and private keys to communicate with the network (enroll), or when new members, peers, or applications register with the Certificate Authority. You can also use the CA logs to debug if there are any problems with certificate verification.
So, you should be able to see the logs in the IBM Cloud Log Analysis service. By default, your logs are collected by the Lite Plan of the Log Analysis service. This plan is free and stores your logs for three days before discarding them. It also allows you to search only the first 500 MB of your logs per day. If your network logs exceed 500 MB, you cannot view new logs in Kibana. If your network generates more than 500 MB of logs, or you would like to retain your logs for more than three days, you can upgrade to a paid version of the Log Analysis Service.
For more info, refer to the IBM Cloud docs here.
The Cloud Composer documentation explicitly states that:
Due to an issue with the Kubernetes Python client library, your Kubernetes pods should be designed to take no more than an hour to run.
However, it doesn't provide any more context than that, and I can't find a definitively relevant issue on the Kubernetes Python client project.
To test it, I ran a pod for two hours and saw no problems. What issue creates this restriction, and how does it manifest?
I'm not deeply familiar with either the Cloud Composer or Kubernetes Python client library ecosystems, but sorting the GitHub issue tracker by most comments shows this open item near the top of the list: https://github.com/kubernetes-client/python/issues/492
It sounds like there is a token expiration issue:
@yliaog this is an issue for us, as we are running kubernetes pods as batch processes and tracking the state of the pods with a static client. Once the client object is initialized, it does no refresh, and therefore any job that takes longer than 60 minutes will fail. Looking through python-base, it seems like we could make a wrapper class that generates a new client (or refreshes the config) every n minutes, or checks status prior to every call (as @mvle suggested). The best fix would be in swagger-codegen, but a temporary solution would probably be very useful for a lot of people.
- @flylo, https://github.com/kubernetes-client/python/issues/492#issuecomment-376581140
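A minimal sketch of the wrapper idea described in that comment, assuming the kubernetes Python package and that periodically reloading kube config (which re-runs the auth plugin and so renews the token) is acceptable; the class and parameter names are illustrative, not taken from any of the projects above:

```python
# Sketch of a proxy that rebuilds its API client every N minutes,
# so long-running jobs keep a fresh auth token. Illustrative only.
import time
from kubernetes import client, config


class RefreshingCoreV1Api:
    """Behaves like CoreV1Api, but reloads kube config periodically."""

    def __init__(self, refresh_every_s: int = 30 * 60):
        self._refresh_every_s = refresh_every_s
        self._refreshed_at = 0.0
        self._api = None
        self._refresh()

    def _refresh(self):
        # Reloading the config re-runs the auth plugin (e.g. gcloud),
        # which is what actually renews the expiring token.
        config.load_kube_config()
        self._api = client.CoreV1Api()
        self._refreshed_at = time.monotonic()

    def __getattr__(self, name):
        # Delegate every API call, refreshing the client first if it is stale.
        if time.monotonic() - self._refreshed_at > self._refresh_every_s:
            self._refresh()
        return getattr(self._api, name)


# Usage: drop-in for CoreV1Api in long-running batch trackers.
api = RefreshingCoreV1Api()
pods = api.list_namespaced_pod("default")
```

The same effect can be had by simply constructing a fresh CoreV1Api before each status check; the wrapper just avoids re-reading the config on every call.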
https://issues.apache.org/jira/browse/AIRFLOW-3253 is the reason (and hopefully, my fix will be merged soon). As the others suggested, this affects anyone using the Kubernetes Python client with GCP auth. If you are authenticating with a Kubernetes service account, you should see no problem.
If you are authenticating via a GCP service account with gcloud (e.g. using the GKEPodOperator), you will generally see this problem with jobs that take longer than an hour because the auth token expires after an hour.
There are more insights here too.
Currently, long-running jobs on GKE always eventually fail with a 404 error (https://bitbucket.org/snakemake/snakemake/issues/932/long-running-jobs-on-kubernetes-fail). We believe that the problem is in the Kubernetes client, as we determined that although _refresh_gcp_token is being called when the token is expired, the next API call still fails with a 404 error.
You can see here that Snakemake uses the Kubernetes Python client.
I have MongoDB running on AWS. The setup has been up and running for several months. The application server (also on AWS) has been running the same code for several months.
I received the alert notification below from Amazon on January 5, 2017.
Since then I can't use MongoDB: it started failing. I tried changing the inbound rule for port 27017 in the Security Group, but the result was the same.
The notification from Amazon is a very general message, so I don't know exactly what I should do.
We are aware of malware that is targeting unauthenticated MongoDB databases which are accessible to the Internet and not configured according to best practices. Based on your security group settings, it appears that your EC2 instances, listed below, may be running a MongoDB database which is accessible to the Internet. If you are not using MongoDB, you may safely ignore this message.
We suggest you utilize EC2 Security Groups to limit connections to your MongoDB databases to only addresses or other security groups that should have access.
...
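In line with that advice, the usual immediate mitigations are to stop exposing port 27017 to 0.0.0.0/0 and to enable MongoDB authentication. As a sketch using boto3 (the security group ID and application-server CIDR below are placeholders):

```python
# Sketch: replace the wide-open 27017 rule with one limited to the
# application server's network. Group ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

GROUP_ID = "sg-0123456789abcdef0"   # the MongoDB instance's security group
APP_CIDR = "10.0.1.0/24"            # the only network that should reach MongoDB

# Remove the open-to-the-world rule (this call fails if no such rule exists).
ec2.revoke_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 27017, "ToPort": 27017,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow only the application server's network to connect.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 27017, "ToPort": 27017,
        "IpRanges": [{"CidrIp": APP_CIDR}],
    }],
)
```

After locking down the network, also enable authentication in mongod.conf (security.authorization: enabled) and create an admin user, since the malware in question targets unauthenticated databases.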
According to the Bluemix dashboard, the total number of service instances I am permitted to use is 4, but my colleague's is 10. What causes this difference?
Is there a way to increase the total number of service instances on Bluemix?
The number of service instances depends on your org quota.
Type cf quotas in a terminal to see the quotas available.
cf org your-org-name will show your current quota.
It sounds like you are on a Trial plan. Check with your colleague on how they got more quota than you.
If you think you have been assigned an incorrect quota, please contact the Bluemix project office team to investigate the discrepancy: http://ibm.biz/bluemixsupport --> select ID+Login.
The cf tool can be downloaded from here: https://github.com/cloudfoundry/cli#downloads