How can I get Istio gateway traces in OpenTelemetry?

I'd be really grateful if someone has documentation or experience with this kind of implementation and can share it. If any information is missing from my side, please tell me what to add, because I don't know exactly what is needed to describe my situation.
The versions are Istio 1.61.1 and OTel Collector 0.68.0.
I'm trying to get traces from the Istio gateway and send them to OpenTelemetry.
I'm using the OpenCensus agent and following this documentation as a guide (distributed-tracing/opencensusagent).
Unfortunately I can't get it working: traces do appear in the OpenTelemetry Collector log, but they have no relation to the requests made through the gateway.
The objective is to capture the whole request chain, including the Istio gateway, so I can see the end-to-end response time.
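For context, here is a minimal sketch of the kind of setup that guide describes, assuming the collector runs as a Service called otel-collector in an observability namespace (the names, namespace and 100% sampling are placeholders, not taken from your cluster). The mesh is pointed at the collector's OpenCensus port, and the collector enables the opencensus receiver so the gateway's spans are accepted:

# IstioOperator overlay (sketch): send spans from the gateway/sidecars via the OpenCensus agent tracer
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100                                          # trace everything while debugging
        openCensusAgent:
          address: "otel-collector.observability.svc:55678"    # hypothetical collector address
          context: ["W3C_TRACE_CONTEXT", "B3"]                 # propagate both header formats

# Matching OpenTelemetry Collector fragment (sketch): accept OpenCensus spans and log them
receivers:
  opencensus:
    endpoint: 0.0.0.0:55678
exporters:
  logging:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [opencensus]
      exporters: [logging]

One more thing worth checking for the "traces have no relation to the request" symptom: the gateway and sidecar spans only join into a single trace if each application forwards the incoming trace headers (traceparent or the x-b3-* set) on its outgoing calls; Envoy cannot do that propagation for you.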

Related

Sending email from a kubernetes pod

I’ve built a service that lives in a Docker container. As part of its required behavior, when it receives a gRPC request it needs to send an email as a side effect. So imagine something like
service MyExample {
rpc ProcessAndSendEmail(MyData) returns (MyResponse) {}
}
where there’s an additional emission (adjacent to the request/response pattern) of an email message.
On a “typical” server deployment, I might have postfix running; if I were using a hosted service, I'd just dial its SMTP endpoint. I don't have either readily available in this case.
As I’m placing my service in a container and would like to deploy to kubernetes, I’m wondering what solutions work best? There may be a simple postfix-like Docker image I can deploy... I just don’t know.
There are several Docker mailservers:
https://github.com/docker-mailserver/docker-mailserver
https://github.com/Mailu/Mailu
https://github.com/bokysan/docker-postfix
Helm charts:
https://github.com/docker-mailserver/docker-mailserver-helm
https://github.com/Mailu/helm-charts
https://github.com/bokysan/docker-postfix/tree/master/helm
Note that the top answer in this reddit thread recommends signing up for a managed mail provider instead of trying to self-host your own.
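If a full mailserver is overkill, a small in-cluster SMTP relay can be enough for the "send an email as a side effect" case. A rough sketch using the bokysan/docker-postfix image, assuming you have an upstream smarthost to relay through (the RELAYHOST variable and the 587 port follow that image's README, so verify them against the current docs; all names here are placeholders):

# Minimal SMTP relay (sketch, not a hardened config)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-relay
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smtp-relay
  template:
    metadata:
      labels:
        app: smtp-relay
    spec:
      containers:
      - name: postfix
        image: boky/postfix                  # image from github.com/bokysan/docker-postfix
        env:
        - name: RELAYHOST                    # upstream smarthost, placeholder value
          value: "smtp.example.com:587"
        ports:
        - containerPort: 587
---
apiVersion: v1
kind: Service
metadata:
  name: smtp-relay
spec:
  selector:
    app: smtp-relay
  ports:
  - port: 587
    targetPort: 587

The gRPC service would then point its SMTP client at smtp-relay.<namespace>.svc.cluster.local:587 instead of a local postfix socket.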

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as: https://whateveryoursiteis.com/ghost). Methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain as well (not just admin access).
My question is — what is the best way to achieve a redirect of the admin URL to a different Domain / protocol while allowing the API url used by Ghost to remain the same?
More background.
We are running Ghost on top of GKE (Google Kubernetes Engine) behind a Multi-Region Ingress, which allows us to dump our Cloud SQL DB down to a SQLite file and build that database into our production Docker containers, which are then deployed to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database / container on content change (not just on code change), we need a separate Admin URL backed by Cloud SQL where we can persist / modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via: https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE without adding an nginx container etc.?
That will be my first attempt after posting this, but I figured either way it might help another Ghost platform fan down the line. I will respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question about whether it's possible to create 301 redirects without adding an nginx container, I can suggest using Istio; you can find more information about traffic routing here.
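To illustrate, here is a hedged sketch of an Istio VirtualService that 301-redirects the admin path to the other domain while keeping the API path on the original host; the gateway name and backend service are placeholders, not taken from your setup:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
  - "whateveryoursiteis.com"
  gateways:
  - ghost-gateway                               # hypothetical Istio gateway
  http:
  - match:                                      # keep the API on the original domain
    - uri:
        prefix: /ghost/api
    route:
    - destination:
        host: ghost.default.svc.cluster.local   # placeholder Ghost service
        port:
          number: 2368
  - match:                                      # redirect the admin panel itself
    - uri:
        prefix: /ghost
    redirect:
      authority: youradminsite.com
      redirectCode: 301
  - route:                                      # everything else goes to the blog as usual
    - destination:
        host: ghost.default.svc.cluster.local
        port:
          number: 2368

As noted further down in this thread, Ghost itself still builds API links from the admin URL, so a routing-level split like this only covers part of the problem.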
OK. As it turns out, the Ghost team currently has things set up to point API connections at the Admin URL. So if you change your Admin URL, expect your clients to attempt to connect to that URL.
I am going to be raising the potential of splitting these off as a feature request over on the ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of admin site, you could introduce those on your proxy level and for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the requests could work... if only I didn't need to run on GKE Multi-Region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
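For anyone who can use the nginx ingress controller instead, that redirect is roughly a one-annotation affair. A hedged sketch (hosts are placeholders, and the annotation comes from the ingress-nginx project, so check it against the controller version you run):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost-admin-redirect
  annotations:
    kubernetes.io/ingress.class: nginx
    # 301 anything matching this rule over to the admin domain
    nginx.ingress.kubernetes.io/permanent-redirect: https://youradminsite.com/ghost
spec:
  rules:
  - host: whateveryoursiteis.com
    http:
      paths:
      - path: /ghost
        pathType: Prefix
        backend:
          service:
            name: ghost              # placeholder; never actually reached because of the redirect
            port:
              number: 2368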
Hopefully this helps someone — I will update as I work through the process. As of now I solved it by dumping my GCP CloudSQL database down to a SQLite db file during build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint — which for me remains the same URL).

UI console to browse topics on Message Hub

I have a Message Hub instance on Bluemix, and am able to produce / consume messages off it. I was looking for a quick, reasonable way to browse topics / messages to see what's going on. Something along the lines of kafka-topics-ui.
I installed kafka-topics-ui locally, but could not get it to connect to Message Hub. I used the kafka-rest-url value from the MessageHub credentials in the kafka-topics-ui configuration file (env.js), but could not figure out where to provide the API key.
Alternatively, in the Bluemix UI, under Kibana, I can see log entries for creating the topic. Unfortunately, I could not see log entries for messages in the topic (perhaps I'm looking in the wrong place or have the wrong filters?).
My guess is I'm missing something basic. Is there a way to either:
configure a tool such as kafka-topics-ui to connect to Message Hub, or
browse topic messages easily?
Cheers.
According to Using the Kafka REST API on Bluemix you need an additional header in all API requests:
-H "X-Auth-Token: APIKEY"
A quick solution is to edit the topic-ui code and include your token in every request. Another solution would be to use a Chrome plugin that can inject the above header. For a more formal solution, I have opened a ticket on GitHub.

WSO2 Carbon 404 Error Redirection for Webapp Deployment?

We are using WSO2 Carbon 4.2.0 through the WSO2 Application Server (AS) package. In replacing an older, highly customized Carbon installation (provided by a company that no longer supports the product, has abandoned it and refuses to work on it, and left us no details on how or what they modified in Carbon), we have deployed a couple of web applications in the webapps container as they were deployed in the older instance. We have changed our WebContextRoot in carbon.xml from the default "/" to a sub-URL, e.g. "/stuff", as is also detailed in the self-answered SO question here. However, the answer given there does not detail what the OP actually encountered when he modified his WSO2 instance.
In testing the above configuration we noticed that if a user goes to a non-existent web address on the server, depending on the format of the URL they:
are redirected to a blank page;
receive a "500 Internal server error" (I suspect this is the embedded Tomcat?);
get sent to the Carbon login page (which we definitely do not want to happen, for security reasons); or
get an XML document stating:
<faultString> The service cannot be found for the endpoint reference (EPR) /stuff/services/nonexistantservicename </faultString>
At least in the case of missing content, we wish the user to be sent to a standardized 404 error page, or at the least receive an HTTP 404 error from the server. For services, the XML error is palatable; we can deal with that.
The only option for us right now to circumvent this issue is to place a proxy in front of the WSO2 instance, which would be another layer to manage and tune, and possibly degrade performance. Please know that I am not a programmer but just an admin with DevOps experience. I would not know how to handle this with e.g. a Java solution or re-coding parts of WSO2. Customizing the core product would also hamper future upgrades of WSO2, a scenario we are trying to dig ourselves out of now as detailed above. Is there no internal WSO2 mechanism to handle non-existent content? Can we not redirect any errors to a standard canned response page?

Trouble adding a new service

I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service to create my new service. (That document is somewhat out of date in that "excluded components" no longer exists.)
In any case, my service shows up as running with a gateway and a node when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client my service is not in the list. Where is this list maintained and why is my service not on the list?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is this maintained and why are other services on this list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was once but it was removed and I repeated the deployment). What is prompting this request for a service that was not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports that suggest that the problem is with domain name mapping. I have verified that the server can find itself:
$ curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap_dev google group.
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS subjects for Cloud Foundry!
If you follow the document correctly things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed and can be a point of confusion when following the steps mentioned in the article (just ignore that step totally).
ERROR -- Failed registering with cloud controller, status=400
Well this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for node and gateway at ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and do they match between the two files, ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and your service's _gateway.yml in the same directory? (See the sketch below.)
I would recommend that you first concentrate on getting the echo service to be listed in the vmc services output. Once done with this you should replicate the steps (with absolute care to modify things like the token) to get your custom service working.
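For reference, a rough sketch of what matching token entries might look like for the echo example; the exact keys are assumptions based on that walkthrough and may differ in your checked-out vcap revision, so verify them against your own files:

# cloud_controller.yml (fragment) - token the cloud controller expects for the service
builtin_services:
  echo:
    token: "0xdeadbeef"              # must match the gateway's token exactly

# echo_gateway.yml (fragment) - token the gateway uses when registering its offering
service:
  name: echo
  version: "1.0"
token: "0xdeadbeef"                  # same value as above
cloud_controller_uri: api.vcap.me

If the tokens do not match (or the service is missing from cloud_controller.yml), registration with the cloud controller can fail, which would be consistent with the status=400 error quoted above.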
Cheers,
Ankit
You should follow this guide. It worked for me.
Regards.