Node creation takes too long and fails - ibm-cloud

I installed Blockchain Platform v2 beta, then tried to configure it and add nodes.
My question is:
Has anyone faced a long delay in node creation, for example for a CA node?
I am facing this problem and cannot find where to check the logs.
Notification error image:
Note:
The node has still not been created, two days later.

Here is the link to the official IBP documentation, which explains how to retrieve and view node logs:
IBM Blockchain Platform - Viewing your node logs
I also suggest checking whether there is any issue in the Kubernetes cluster where IBP is running.
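If the console never shows logs because the node never came up, a quick way to check is to look at the underlying pods directly, assuming you have kubectl access to the cluster where IBP is deployed. This is only a sketch; the namespace and pod names are placeholders for your deployment.

```bash
# List the IBP pods and look for ones stuck in Pending, ImagePullBackOff, CrashLoopBackOff, etc.
kubectl get pods -n <ibp-namespace>

# Describe a stuck CA pod to see scheduling, image pull, or volume binding errors
kubectl describe pod <ca-pod-name> -n <ibp-namespace>

# Tail the logs of all containers in the CA pod
kubectl logs <ca-pod-name> -n <ibp-namespace> --all-containers --tail=100
```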

As per the IBM Cloud documentation:

If you use Enterprise Plan networks, you can view component logs in a text file format. If you use Starter Plan networks, component logs are gathered by the IBM Cloud Log Analysis service and you can view the logs in Kibana.

Each component generates logs from different activities. This is because each component plays different roles within the Hyperledger Fabric network architecture and transaction flows.

Certificate Authority logs: The Certificate Authority manages the identity of participants within the network. In Certificate Authority logs, you can find logs from when participants generate public and private keys to communicate with the network (enroll), or when new members, peers, or applications register with the Certificate Authority. You can also use the CA logs to debug if there are any problems with certificate verification.
So, you should be able to see the logs in the IBM Cloud Log Analysis service. By default, your logs are collected by the Lite Plan of the Log Analysis service. This plan is free and stores your logs for three days before discarding them. It also allows you to search only the first 500 MB of your logs per day. If your network logs exceed 500 MB, you cannot view new logs in Kibana. If your network generates more than 500 MB of logs, or you would like to retain your logs for more than three days, you can upgrade to a paid version of the Log Analysis Service.
For more info, refer to the IBM Cloud docs here.

Related

Container App Environment creation timing out

Where I work has just started migrating to the cloud. We've successfully deployed a number of resources using Terraform and Pipelines into Azure.
Where we are running into issues is deploying a Container App Environment: we have code that was working in a less locked-down environment (set up for a proof of concept), but we are now having issues using that code in our go-forward environment.
When deploying, the Container App Environment spends 30 minutes attempting to create before it returns a context deadline exceeded error. Looking in the Azure portal, I can see the resource in the "Waiting" provisioning state, and I can also see the MC_ and AKS resources that get generated. It then fails around four hours later.
Any advice?
I suspect it's related to security on the Virtual Network that the subnets are sitting on, but I'm not seeing any logs on the deployment to confirm. The original subnets had a Network Security Group (NSG) assigned, and I configured the rules that Microsoft provides, before adding a couple of subnets without an NSG assigned; no luck either way.
My next step is to try provisioning it via the GUI and see if that works.
I managed to break our build in the "anything goes" environment.
The root cause is an incomplete configuration of the Virtual Network, which has custom DNS entries. This has now been passed to our network architects to resolve. If I can get more details on the fix they apply, I'll include them here for anyone else who runs into the issue.
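For anyone hitting the same symptom, here is a rough way to check the two things mentioned above (custom DNS on the VNet, and the environment's provisioning state) from the CLI. This is only a sketch: the resource names are placeholders, and `az containerapp` requires the containerapp extension.

```bash
# Show any custom DNS servers configured on the VNet (an empty list means Azure-provided DNS)
az network vnet show \
  --resource-group my-rg \
  --name my-vnet \
  --query "dhcpOptions.dnsServers"

# Check the Container App Environment's provisioning state while it sits in "Waiting"
az containerapp env show \
  --resource-group my-rg \
  --name my-cae \
  --query "properties.provisioningState"
```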

How do I get CPU metrics in Dynatrace API for a service?

Dynatrace has a REST API to get statistics. In the console I can go to a host and view the processes running, for instance WebLogic. From there I can drill in and see Spring Boot web controllers and their CPU utilization as a percentage. These are also available through the "Services" section of the admin portal.
I'm trying to automate reading these services and their CPU so I can compare multiple environments. Even though I can see the percentage for a service in the Dynatrace admin portal front end, I can't seem to find the right API to get the data.
I can see the results at the process level with /api/v2/metrics/query?metricSelector=builtin:tech.generic.cpu.usage, but I can't seem to get them at the service level. For instance, WebLogic is one server and one process group instance, but it has multiple web controllers (sometimes several per web application), and I would like to see the CPU usage for each.
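To illustrate the query pattern described above: the following sketch lists the metric keys available in an environment and then queries a service-scoped CPU metric split per service. The key builtin:service.cpu.time and the splitBy dimension are assumptions here; the actual keys should be taken from the /api/v2/metrics listing, and the token and tenant are placeholders.

```bash
# List CPU-related metric keys available in the environment
curl -s -H "Authorization: Api-Token $DT_API_TOKEN" \
  "https://$DT_TENANT/api/v2/metrics?text=cpu&fields=metricId,displayName"

# Query a service-scoped CPU metric, split per service entity
# (builtin:service.cpu.time is an assumed key; replace it with whatever the listing above shows)
curl -s -H "Authorization: Api-Token $DT_API_TOKEN" \
  "https://$DT_TENANT/api/v2/metrics/query?metricSelector=builtin:service.cpu.time:splitBy(%22dt.entity.service%22)&resolution=5m&from=now-2h"
```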

Application Performance monitoring on Swisscom Application Cloud

I am investigating options for monitoring our installation in Swisscom's Cloud Foundry. My objectives are the following:
monitor performance indicators for deployed applications (such as CPU, disk, memory)
monitor performance indicators for services (slow queries, number of queries, ideally also some metrics on hitting quotas)
So far, I understand the options are the following (including some BUTs):
I used a very nice TOP cf-plugin (github).
This works very well. It seems that it registers itself to get the required firehose nozzles and consume data.
That is very useful for tracing / ad-hoc monitoring, but not very good for serious infrastructure monitoring.
Another way I found is to use the firehose-syslog solution.
This can be deployed as an app to (as far as I understand) do the job in a similar way to the TOP cf plugin.
The problem is that it requires a registered client so it can authenticate with the Doppler endpoint. For some reason, the TOP cf plugin does that automatically / in another way.
The last option I am considering is building the monitoring into the app itself (using a special buildpack).
That can be done with Datadog, for example. But it also seems to require a dedicated UAA client to register the nozzle.
I would like to check whether somebody is (or was) on a similar road and has some findings.
Eventually I would like to raise the following questions towards the Swisscom community support:
Is it possible to register a UAA client to be able to ingest events through the firehose nozzle from an external service? (This requires admin credentials, if I read correctly.)
Is there an alternative way to authenticate with the nozzle (for example, using a special user and their authentication token)?
Is there any alternative way to monitor CF deployments in Swisscom? Finally, is there a paper, blog post or other form of documentation that would be helpful in this respect (also for other users of AppCloud)?
Since it requires admin permissions, we can not give out UAA clients for the firehose.
However, there are different ways to get metrics in context of a user.
CF API
You can obtain basic metrics of a specific app by polling the CF API:
https://apidocs.cloudfoundry.org/5.0.0/apps/get_detailed_stats_for_a_started_app.html
However, since you have to poll (and for each app), it's not the recommended way.
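As a sketch of that polling approach (the app name is a placeholder, and the exact endpoint depends on your CF API version; the v2 stats endpoint shown here matches the linked apidocs):

```bash
# Look up the app GUID, then poll the per-instance stats (CPU, memory, disk) via the CF API
APP_GUID=$(cf app my-app --guid)
cf curl "/v2/apps/${APP_GUID}/stats"
```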
Metrics in syslog drain
CF allows devs to forward their logs to syslog drains; in more recent versions, CF also sends metrics to this syslog drain (see https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html#container-metrics).
For example, you could use Swisscom's Elasticsearch service to store these metrics and then analyze them using Kibana.
Metrics using loggregator (firehose)
The firehose allows streaming logs to clients for two types of roles:
Streaming all logs to admins (which requires a UAA client with admin permissions) and streaming app logs and metrics to devs with permissions in the app's space. This is also what the cf logs command uses. cf top also works this way (it enumerates all apps and streams the logs of each app).
However, you will find out that most open source tools that leverage the firehose only work in admin mode, since they're written for the platform operator.
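For completeness, the developer-scoped streaming described above boils down to the standard CLI. The plugin name and community repository shown here are how the top plugin is usually installed and may differ in your setup.

```bash
# Stream logs and container metrics for a single app in the developer (non-admin) scope
cf logs my-app

# The top plugin from the community repository enumerates apps and streams each one
cf install-plugin -r CF-Community "top"
cf top
```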
Of course, you also have the option to monitor your app by instrumenting it (the white-box approach), for example by configuring Spring Boot Actuator in a Spring Boot app or by including an agent from your favourite APM vendor (Dynatrace, AppDynamics, ...).
I guess this is the most common approach; we've seen a lot of teams having success by instrumenting their applications, especially since advanced monitoring requires you to create your own metrics anyway, as the firehose-provided CPU/memory metrics are not that powerful in a microservice world.
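As a minimal sketch of the white-box approach with Spring Boot Actuator (assuming Spring Boot 2+ with the metrics endpoint exposed over the web; the route is a placeholder):

```bash
# Process CPU usage as reported by Micrometer through the actuator metrics endpoint
curl -s https://my-app.example.com/actuator/metrics/process.cpu.usage

# Request timings, which can stand in for latency / requests-per-second style metrics
curl -s https://my-app.example.com/actuator/metrics/http.server.requests
```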
However, option 2 would be worth a try as well, especially since the ELK stack's metrics support is getting better and better.

Cloud foundry app status or health notification?

Is there a way to get a notification when a Cloud Foundry application fails or is unreachable? I mean, to register for a deployed app and, if the status of the application changes to failed or similar, receive a notification.
On Pivotal Cloud Foundry, when an app crashes, an event is emitted through the firehose.
The PCF Metrics tile, available from Pivotal, can be deployed to your PCF foundation. PCF Metrics tracks all events for apps running on the foundation, and they are accessible to developers (through Apps Manager). I believe the Metrics tile keeps history for up to two weeks. I am not aware of any alerting capabilities in the PCF Metrics tile (I could be wrong, in which case, please correct me) that will prompt you when an app crashes.
Other approaches are to implement event logging tools like Splunk, New Relic, etc. They support alerts, but you will have to build those.
API monitoring tools like AppD, Apigee, and New Relic provide alerting and can notify you when the response time of an app has degraded (as in, your app has crashed). This approach is a little more involved. You may need to add an agent to your buildpack, depending on the tool you choose.
IMHO there is no such built-in feature for Cloud Foundry, but IBM Cloud offers the Availability Monitoring service to monitor apps and send out alerts in case of unavailability or other similar events. The service is part of the DevOps category in the IBM Cloud catalog.
There is also Alert Notification to manage alerts, the notification of the right groups via all kinds of channels and to track the alert status. For your question you should start with the Availability Monitoring and then work towards how those events are handled.
You can use the cf events appname command to get a list of all events for the application; this will print out all recent events, such as application crashes.
If you run cf events appname -v, you will see the JSON REST calls the cf CLI makes to Cloud Foundry.
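A crude way to turn that into a notification is to poll cf events and look for crash entries. This is only a sketch: the app name is a placeholder, the alert is just an echo, and a real script would need to remember which events it has already seen.

```bash
#!/bin/sh
# Poll `cf events` for crash entries and fire a very simple alert
APP=my-app
while true; do
  if cf events "$APP" | grep -q "app.crash"; then
    echo "ALERT: $APP reported a crash event"   # replace with a webhook call (Slack, PagerDuty, ...)
  fi
  sleep 60
done
```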
You can use the Cloud Foundry Java Client to write your own code that interacts with Cloud Foundry.
Another thing you can do is stream your application logs to any syslog-compatible log aggregation service, for example Splunk, and then have Splunk monitor for app crash events in the logs. You can read how to configure app log streaming in the docs.
This functionality is scheduled to be available with PCF Metrics 1.5 and can be seen in PWS (Pivotal Web Services) in alpha mode.
The functionality is available under the Monitors tab inside of PCF Metrics (1.5).
Webhook notifications (e.g. Slack) can be configured for a number of events (including, as you discussed, crashes).
You can create a user-provided service with a syslog drain URL and then bind the service to your application. If any events happen, the logs will then be sent to the URL you provided.
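For example (a sketch only; the drain URL and names are placeholders, and the drain endpoint has to be reachable from the platform):

```bash
# Create a user-provided service whose only purpose is to carry the syslog drain URL
cf create-user-provided-service my-log-drain -l syslog-tls://logs.example.com:6514

# Bind it to the app and restage so the drain takes effect
cf bind-service my-app my-log-drain
cf restage my-app
```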

SSH to bluemix from bosh and capture metrics

Has anyone tried connecting to IBM Bluemix using the BOSH CLI? I am seeing performance issues in my requests and was going through this article on Cloud Foundry. I am planning to SSH into the gorouter and monitor the gorouter's CPU utilization.
Can someone recommend any way to capture the following metrics from Bluemix:
CPU utilization
Latency
Requests per second
What do you mean by "connecting to IBM Bluemix using bosh-cli"?
If you mean the publicly available IBM Cloud (formerly Bluemix) represented at https://console.bluemix.net/, that's not possible. The BOSH CLI is used to maintain the platform itself, i.e. Cloud Foundry and potentially other deployments, but not your apps.
If you have a private installation, you might check the metrics that the system provides. Info here: https://docs.cloudfoundry.org/running/all_metrics.html
If you want metrics about your app, I could imagine the app itself providing these metrics, or you could put something in place like New Relic monitoring; they have a range of application performance monitoring (APM) agents. Info here: https://docs.newrelic.com/docs/agents
HP