I am running a Kubernetes cluster on GKE. I have been told that the Kubernetes API server comes with an integrated Swagger UI, and that the UI is a friendly way to explore the APIs. However, I am not sure how to enable this on my cluster. Any guidance is highly appreciated. Thanks!
I've researched your question a bit, and I'll share what I found.
This feature is not enabled by default on every Kubernetes installation; you need to enable the Swagger UI through the API server flag enable-swagger-ui, which I believe is what you were looking for:
--enable-swagger-ui Enables swagger ui on the apiserver at /swagger-ui.
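On a cluster where you control the API server flags you could enable it yourself; here is a minimal sketch, assuming a kubeadm-style master with a static pod manifest (the file path is the kubeadm default):

# Assumption: a self-managed, kubeadm-style master (not GKE).
# Add the flag to the apiserver command in the static pod manifest:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
#     command:
#     - kube-apiserver
#     - --enable-swagger-ui=true
#     ...
# The kubelet notices the change and restarts the apiserver;
# the UI is then served at https://<master-ip>/swagger-ui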
The issue is that, as far as I can tell, this flag is not enabled for Google Kubernetes Engine: the master node in GKE does not serve any request for this resource, the port appears to be closed, and since the control plane is managed I believe the flag cannot be enabled.
However, according to the documentation, the master exposes a set of resources that give you access to the API documentation so you can render it with the tool you prefer. This is indeed the case, and the following files are available:
https://master-ip/swagger.json (you can get the master IP by running $ kubectl cluster-info)
{"swagger": "2.0",
"info": {
"title": "Kubernetes",
"version": "v1.9.3"
},
"paths": {
"/api/": {
"get": {
...
https://master-ip/swaggerapi
{"swaggerVersion": "1.2",
"apis": [
{
"path": "/version",
"description": "git code version from which this is built"
},
{
"path": "/apis",
"description": "get available API versions"
},
...
According to this blog post from Kubernetes you could make use of this file:
From kube-apiserver/swagger.json. This file will have all enabled GroupVersions, routes and models, and will be the most up-to-date file for a specific kube-apiserver. [...] There are numerous tools that work with this spec. For example, you can use the Swagger editor to open the spec file and render documentation, as well as generate clients; or you can directly use swagger-codegen to generate documentation and clients. The clients it generates will mostly work out of the box, but you will need some support for authorization and some Kubernetes-specific utilities. Use the Python client as a template to create your own client.
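To illustrate, here is a minimal sketch of pulling the spec from a GKE cluster and rendering it locally. The Docker image is the stock Swagger UI; treat the exact spec path as an assumption, since newer clusters serve it at /openapi/v2 instead of /swagger.json:

# Tunnel to the apiserver with your kubectl credentials, which avoids
# dealing with master certificates and tokens directly.
kubectl proxy --port=8001 &

# Fetch the spec (on newer clusters, try /openapi/v2 instead).
curl -s http://localhost:8001/swagger.json -o swagger.json

# Render it with the stock Swagger UI image, then browse to
# http://localhost:8080
docker run -d -p 8080:8080 \
  -e SWAGGER_JSON=/spec/swagger.json \
  -v "$PWD:/spec" swaggerapi/swagger-ui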
Apparently, to use the log4j-spring-cloud-config-client with Spring Cloud Config, you need to take advantage of the SearchPathLocator functionality to pull the raw file based on a specific URI. From the Spring Cloud Config code, it appears only JGitEnvironmentRepository and NativeEnvironmentRepository implement that interface and offer that functionality.
Running locally, if I hit the following endpoint, I get back a raw log4j2 config file: http://localhost:8088/config-server-properties-poc/default/master/log4j2.xml.
When I try that with an S3 backend, I get a 404, and it doesn't try to search for that specific file. I was able to work around this by renaming my file to log4j2-default.json (XML is not supported). When I hit the following URL, I get my properties back, but not in the correct format:
http://localhost:8088/log4j2/default
Format
{
  "name": "log4j2",
  "profiles": ["default"],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [{
    "name": "log4j2",
    "source": {
      "configuration.appenders.appender[0].PatternLayout.Pattern": "${logging_pattern}",
      "configuration.appenders.appender[0].name": "Console",
      "configuration.appenders.appender[0].target": "SYSTEM_OUT",
      "configuration.appenders.appender[0].type": "Console",
      "configuration.loggers.Root.AppenderRef.ref": "Console",
      "configuration.loggers.Root.level": "info",
      "configuration.loggers.logger[0].AppenderRef.ref": "Console",
      "configuration.loggers.logger[0].additivity": "false",
      "configuration.loggers.logger[0].level": "info",
      "configuration.loggers.logger[0].name": "com.paychex",
      "configuration.loggers.logger[1].AppenderRef.ref": "Console",
      "configuration.loggers.logger[1].additivity": "false",
      "configuration.loggers.logger[1].level": "info",
      "configuration.loggers.logger[1].name": "com.paychex.traceability",
      "configuration.loggers.logger[2].AppenderRef.ref": "Console",
      "configuration.loggers.logger[2].level": "WARN",
      "configuration.loggers.logger[2].name": "org.apache.catalina.startup.VersionLoggerListener",
      "configuration.properties.property[0].name": "logging_pattern",
      "configuration.properties.property[0].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX},severity=%p,thread=%t,logger=%c,%X,%m%n",
      "configuration.properties.property[1].name": "traceability_logging_pattern",
      "configuration.properties.property[1].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ},severity=%p,thread=%t,logger=%c,%X,%m%n"
    }
  }]
}
As you can see, the properties are wrapped in the Spring Environment object and flattened into a Map, so peeling this apart and getting log4j2 to parse it would be tricky.
Has anyone gotten the log4j client to work with a non-git backend?
You are correct. Log4j's support for Spring Cloud Config relies on SCC's support for serving plain text files.
The latest Spring Cloud Config documentation indicates that plain-text support via URLs only works for Git, SVN, native and AWS S3, and that for S3 to work Spring Cloud AWS must be included in the Config Server. This issue indicates that support for serving plain-text files from S3 was added in Spring Cloud Config 2.2.1.RELEASE, published in December 2019. There is still an open issue to add support for a Vault backend.
Log4j's support for SCC was added in the 2.12.0 release in June 2019, when SCC did not yet support AWS S3. I have only tested it with native (for unit/functional testing) and Git, since that is the backend my employer uses. However, according to the documentation, if you can get SCC to serve plain text with an AWS backend then Log4j should work as well, since all it does is query SCC via URLs.
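For reference, the plain-text endpoint follows the /{application}/{profile}/{label}/{path} pattern that the question already used against the Git backend. A quick sketch, assuming a config server on localhost:8088 and Spring Cloud Config >= 2.2.1.RELEASE for the S3 case:

# Plain-text resource endpoint: /{application}/{profile}/{label}/{path}
# (for Git the label is a branch; for S3 it becomes part of the object key)
curl http://localhost:8088/config-server-properties-poc/default/master/log4j2.xml

# By contrast, /{application}/{profile} returns the parsed Environment
# JSON shown in the question, not the raw file:
curl http://localhost:8088/log4j2/default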
I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know where to go from there. The idea is that people access the service from SCP, so that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a cloud connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the cloud connector destination.
Make a new folder in Web IDE.
Create the file neo-app.json with this content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
The path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your cloud connector destination there.
Deploy.
My test app with the destination google pointing to https://www.google.com came out looking like Google. The paths are relative, so the page doesn't fully work, but Google is clearly being proxied.
You'll still have to authenticate etc.
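As a quick smoke test, you can request the proxied path relative to the deployed app's URL (myapp.scp-account is the hypothetical host used above; substitute the URL SCP shows after deployment):

# A 200 (or a redirect from Google) here means the route and destination work.
curl -I https://myapp.scp-account/google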
Hello community, I need some help. I have my OData service already running, and I have a URL like this:
https://myclient:port/sap/opu/odata/SAP/servicename_SRV/MaterialListSet
This is my config, which I suppose is wrong.
manifest.json
"dataSources": {
"invoiceRemote": {
"uri": "https://myclient:port/sap/opu/odata/SAP/servicename_SRV/",
"type": "OData",
"settings": {
"odataVersion": "2.0"
}
}
}
.
.
.
"models": {
...
"invoice": {
"dataSource": "invoiceRemote"
}
}
I get these two errors:
Failed to load resource: the server responded with a status of 401 (Unauthorized)
and
Failed to load https://client:port/sap/opu/odata/SAP/odata_SRV/$metadata?sap-language=ES: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:port' is therefore not allowed access. The response had HTTP status code 401.
This line is not good:
"uri": "https://myclient:port/sap/opu/odata/SAP/servicename_SRV/",
This is because you have to use relative URLs, so it should be:
"uri": "/sap/opu/odata/SAP/servicename_SRV/",
The reason behind that is simple: your customer almost certainly has more than one SAP Gateway/Fiori system, so you shouldn't hard-code the domain of your development or production system.
Assuming you will eventually deploy your UI5 application to the SAP NetWeaver system, that system will contain both the OData service and the UI5 application. As they are hosted on the same server, relative URLs will work just fine.
However, inside Web IDE this is not enough, because if you use relative URLs then SAP Cloud/Web IDE will assume that you are trying to access a resource in the cloud.
That is why you should add or change the neo-app.json file inside your UI5 project. If you already have it, just change it. If you do not have this file in your project yet, you can easily create it by right-clicking on the project name and choosing New >> HTML5 Application Descriptor. This will create the file in the root of your project (outside the webapp folder that is usually present).
Finally, you will have to add a route to this neo-app.json file, like this:
{
  "path": "/sap/opu/odata",
  "target": {
    "type": "destination",
    "name": "NAME_OF_YOUR_SAP_CLOUD_DESTINATION",
    "entryPath": "/sap/opu/odata"
  },
  "description": "SAP Gateway System"
}
This tells Web IDE to forward every matching request to a different system via the specified destination.
This will only work if you have in place an SAP Cloud Connector linking your SAP Cloud account with your SAP NetWeaver on premise system.
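For completeness, here is a sketch of what such a destination typically looks like in the SAP Cloud cockpit. The property names are the standard ones, but all values are assumptions for illustration:

# Hypothetical destination configuration (SAP Cloud Platform cockpit)
Name=NAME_OF_YOUR_SAP_CLOUD_DESTINATION
Type=HTTP
URL=http://my-onpremise-gateway:8000    # virtual host mapped in the Cloud Connector
ProxyType=OnPremise                     # route through the SAP Cloud Connector
Authentication=BasicAuthentication
# Additional properties so Web IDE offers this destination:
WebIDEEnabled=true
WebIDEUsage=odata_abap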
Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't have any apparent issues when running:
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "simonlayfield",
  // Location of app (local directory)
  "app": ".",
  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template; however, the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can know specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates; please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
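To answer the logging question directly: mupx can tail the app's logs, and since mupx runs everything in Docker you can also inspect the containers over ssh. A sketch, where the container names are mupx's defaults, so verify them with docker ps:

# From the project directory on your local machine:
mupx logs -f                            # tail the app container's logs

# Or directly on the VM over ssh (mupx runs the app and MongoDB in Docker):
docker ps                               # list the running containers
docker logs --tail 100 simonlayfield    # app container (named after appName)
docker logs --tail 100 mongodb          # local MongoDB container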
I am new to Chef and have been struggling to find best practices for managing application configuration in an environment cookbook [source #1].
The environment cookbook I'm working on should do the following:
Prepare the node for a custom application deployment by creating directories, users, etc. that are specific for this deployment only.
Add initialization and monitoring scripts specific for the application deployment.
Define the application configuration settings.
This last responsibility has been a particularly tough nut to crack.
An example configuration file of an application deployment might look as follows:
{
  "server": {
    "port": 9090
  },
  "session": {
    "proxy": false,
    "expires": 100
  },
  "redis": [{
    "port": 9031,
    "host": "rds01.prd.example.com"
  }, {
    "port": 9031,
    "host": "rds02.prd.example.com"
  }],
  "ldapConfig": {
    "url": "ldap://example.inc:389",
    "adminDn": "CN=Admin,CN=Users,DC=example,DC=inc",
    "adminUsername": "user",
    "adminPassword": "secret",
    "searchBase": "OU=BigCustomer,OU=customers,DC=example,DC=inc",
    "searchFilter": "(example=*)"
  },
  "log4js": {
    "appenders": [
      {
        "category": "[all]",
        "type": "file",
        "filename": "./logs/myapp.log"
      }
    ],
    "levels": {
      "[all]": "ERROR"
    }
  },
  "otherService": {
    "basePath": "http://api.prd.example.com:1234/otherService",
    "smokeTestVariable": "testVar"
  }
}
Some parts of this deployment configuration file are more stable than others. While this may vary depending on the application and setup, I prefer to keep things like port numbers and usernames the same across environments for simplicity's sake.
Let me classify the configuration settings:
Stable properties
session
server
log4js.appenders
ldapConfig.adminUsername
ldapConfig.searchFilter
otherService.basePath
redis.port
Environment specific properties
log4js.levels
otherService.smokeTestVariable
Partial-environment specific properties
redis.host: rds01.[environment].example.com
otherService.basePath: http://api.[environment].example.com:1234/otherService
Encrypted environment specific properties
ldapConfig.adminPassword
Questions
How should I create the configuration file? Some options: 1) use a file shipped within the application deployment itself, 2) use a cookbook file template, 3) use a JSON blob as one of the attributes [source #2], 4)... other?
The configuration file contains a wide range of variability; how is this best managed with Chef? Roles, environments, per-node configuration, data bags, encrypted data bags...? Or should I opt for environment variables instead?
Some key concerns in the approach:
I would prefer there is only 1 way to set the configuration settings.
Changing the configuration file for a developer should be fairly straightforward (they are using Vagrant on their local machines before pushing to test).
The passwords must be secure.
The Chef cookbook is managed within the same git repository as the source code.
Some configuration settings require a great deal of flexibility; for example the log4js setting in my example config might contain many more appenders with dozens of fairly unstructured variables.
Any experiences would be much appreciated!
Sources
http://blog.vialstudios.com/the-environment-cookbook-pattern/
http://lists.opscode.com/sympa/arc/chef/2013-01/msg00392.html
http://jtimberman.housepub.org/blog/2013/01/28/local-templates-for-application-configuration/
http://realityforge.org/code/2012/11/12/reusable-cookbooks-revisited.html
Jamie Winsor gave a talk at ChefConf that goes further in explaining the environment cookbook pattern's rationale and usage:
Chefcon: talking about self-contained releases, using chef
Slides
In my opinion, one of the key concepts this pattern introduces is the idea of using Chef environments to control the settings of each application instance. The environment is updated, using Berkshelf, with the run-time versions of the cookbooks used by the application.
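In practice that update is a one-liner. A sketch, assuming a Berksfile in the application cookbook and a hypothetical environment name:

# Resolve cookbook versions from the Berksfile and write Berksfile.lock
berks install

# Pin exactly those versions in the (hypothetical) application environment
berks apply myapp-production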
What is less obvious is that if you decide to reserve a Chef environment for the use of a single application instance, then it becomes safe to use that environment to configure the application's global run-time settings.
An example is given in the berkshelf-api installation instructions. There you will see the production environment (for the application) being edited with various run-time settings:
knife environment edit berkshelf-api-production
In conclusion, Chef gives us lots of options. I would make the following generic recommendations:
Capture defaults in the application cookbook
Create an environment for each application instance (as recommended by pattern)
Set run-time attribute overrides in the environment (see the sketch below)
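Here is a sketch of such a dedicated environment; the names are hypothetical and the override values are lifted from the question's example config, just to show where version pins and overrides live:

{
  "name": "myapp-production",
  "description": "Dedicated environment for one instance of myapp (hypothetical)",
  "cookbook_versions": {
    "myapp": "= 1.2.0"
  },
  "override_attributes": {
    "myapp": {
      "otherService": { "basePath": "http://api.prd.example.com:1234/otherService" },
      "redis": { "hosts": ["rds01.prd.example.com", "rds02.prd.example.com"] },
      "log4js": { "levels": { "[all]": "ERROR" } }
    }
  }
}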
Notes:
See also the berksflow tool, designed to make the environment cookbook pattern easier to implement.
I have made no mention of using roles. These can also be used to override attributes at run-time, but it might be simpler to capture everything in a dedicated Chef environment. Roles seem better suited to capturing information peculiar to a component of an application.