I have successfully deployed a stream using Spring Cloud Data Flow on EKS, but I need to debug one application of the stream.
I have set spring.cloud.deployer.kubernetes.environment-variables: JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000' on the application I want to debug, and the application starts and is listening on that port.
Is there any property to tell Kubernetes to map this port and make it accessible?
Thank you.
Try this: add the debug port to the application's servicePorts deployer property.
And then try a kubectl port-forward:
kubectl port-forward service/YOUR_SERVICE_NAME <host-port>:<service-port>
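For example, a minimal sketch (the service name is hypothetical; use whatever service SCDF created for the app you are debugging):
# forward local port 8000 to the app's debug port 8000
kubectl port-forward service/poc-stream-app-source 8000:8000
# then attach your IDE's remote JVM debugger to localhost:8000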
By the way, the documentation is quite complete; there's a lot of information here:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/
Thanks @bguess for pointing me in the right direction.
Finally this is what I have done:
When we are going to deploy with the web interface, we click the edit button of the application we want to debug:
Add JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000' into environment-variables.
In my case I have Kubernetes on AWS, deployed in private mode, and the only way I have found so far is to create a LoadBalancer for the application. I know it is insecure, but it's enough for my needs.
Finally, as @bguess pointed out, we have to add our debug port to servicePorts. This property isn't in the list when we push the edit button of the application, so we have to write it:
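For illustration, the entry to type would look like this (a sketch; depending on the SCDF version the key may need a kubernetes. or deployer.<app>.kubernetes. prefix, matching the servicePorts property used in the JSON payload further below):
servicePorts: 8000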
So this is the way to configure it with the web interface.
If we want to use a terminal on Linux or similar, we can follow these steps:
definition="app-source | app-process | app-sink"
curl "$scdf_url/streams/definitions" --write-out '%{http_code}' --silent --output /dev/null -X POST -d "name=poc-stream&definition=$definition&deploy=false"
Where definition is our stream definition and scdf_url is the Spring Cloud Data Flow server URL. After the curl call we will have our stream created but undeployed. To deploy it with the debug configuration:
properties="$(cat << EOF
{
  "deployer.app-source.kubernetes.environment-variables":
    "JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000'",
  "deployer.app-source.kubernetes.servicePorts": "8000",
  "deployer.app-source.kubernetes.create-load-balancer": "true"
}
EOF
)"
curl "$scdf_url/streams/deployments/poc-stream" --write-out '%{http_code}' --silent --output /dev/null -X POST -H "Content-Type: application/json" -d "$properties"
And that's the way I have configured it.
Additionally, you will have to increase the idle timeout of the LoadBalancer, because its default value is 60 seconds and after that time it will disconnect you.
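One way to do that from the deployment itself is a service annotation. A sketch, assuming the deployer's service-annotations property and the classic AWS ELB idle-timeout annotation (verify both names for your deployer version and load balancer type); it would be one more entry in the properties JSON above:
"deployer.app-source.kubernetes.service-annotations": "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:3600"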
I'm using the VMware REST API (/api/vcenter/host) to query information about the VM hosts registered on a vCenter. At the moment I can only seem to get basic info like this:
{
"host": "host-10",
"name": "192.168.18.89",
"connection_state": "CONNECTED",
"power_state": "POWERED_ON"
}
but the PowerShell command "Get-VMHost | Format-List" shows much more useful information, such as ESXi version, hardware specs, etc.
Can I get this kind of information via the REST API as well?
Thanks!
Yes, you can get that kind of information, depending on exactly what info you are after. The REST API references are here, and you can use them to look up what you need: https://developer.vmware.com/apis/vsphere-automation/latest/
For example, if you want to know what software is installed, you can do something like this.
export basepw=$(echo -n 'administrator@vsphere.local:{password}' | base64)
export token=$(curl -k -X POST -H "Authorization: Basic ${basepw}" https://{vCenterIP}/api/session/ | tr -d '"')
curl -k -X GET -H "vmware-api-session-id: ${token}" "https://{vCenterIP}/api/esx/software" -H "Content-Type: application/json" -d '{ "auth_type": "EXISTING", "host": "{host-##}"}' | jq .
It looks like vSphere 8 has more options that might fit what you want, like extracting the host configuration: https://developer.vmware.com/apis/vsphere-automation/latest/esx/settings/hosts.configuration/
From the looks of it, some other SDKs are more fully developed than the REST API in its current state. Personally I like govmomi and pyvmomi, and both have a CLI tool that can get you started pretty fast. The CLI tool for govmomi, govc, doesn't require anything extra to run, so it is fairly portable and might help you with what you are doing.
https://github.com/vmware/govmomi
https://github.com/vmware/pyvmomi
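For instance, a minimal govc sketch (the connection values are placeholders for your environment, and the -host.ip lookup flag is an assumption worth checking with govc host.info -h; host.info prints the ESXi version, build, and hardware summary that Get-VMHost shows):
export GOVC_URL='https://{vCenterIP}/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='{password}'
export GOVC_INSECURE=true   # same effect as curl -k above
# look up the host by the IP from the question and print its details
govc host.info -host.ip 192.168.18.89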
I want to use curl http://127.0.0.1:8033/api/v1 to access https://http2.pro/api/v1 over HTTP/2.
This API URL returns whether the client is using HTTP/2.
I have tried this (I'm using the latest version, 5.0.1):
./mitmdump -p 8033 --http2 --set http2_priority=true --mode reverse:https://http2.pro:443
However curl 127.0.0.1:8033/api/v1 still gives:
{"http2":0,"protocol":"HTTP\/1.1","push":0,"user_agent":"curl\/7.69.1-DEV"}
In contrast, curl https://http2.pro/api/v1 --http2 gives: (this is what I expected)
{"http2":1,"protocol":"HTTP\/2.0","push":0,"user_agent":"curl\/7.69.1-DEV"}
mitmproxy currently does not support converting between HTTP/1 and HTTP/2. For HTTP/2 to happen, both endpoints need to speak it. It is on our todo list and will hopefully be possible soon (https://github.com/mitmproxy/mitmproxy/issues/1775).
I have a large set of GCP Cloud Build triggers that I invoke via Cloud Scheduler, all running fine.
Now I want to invoke these triggers via an external API call and pass them dynamic parameters that vary in value and number.
I was able to start a trigger by running an API request, but any JSON parameters I sent in the API request were ignored.
Google describes substitution parameters at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I define these variables in the cloudbuild.yaml file; however, they were not propagated into my shell script from the API request.
I don't get any errors with authentication or authorization, so security is probably not the issue.
Is my idea supported at all, or do I need to resort to another solution, such as running a GKE cluster with containers that would expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB but for some people we still need a nicer "UI" to start builds / pass variables.
I got scripts from here and modified them to our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is their REST API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the trigger ID of the trigger you want to run. (You can also get this by parsing the output of another REST API.)
TRIGGER_ID=$1
# we need to specify AT LEAST the branch name or commit id (checked below)
BRANCH_OR_SHA=$2
# check if branch_name or commit_sha
if [[ $BRANCH_OR_SHA =~ [0-9a-f]{5,40} ]]; then
# is COMMIT_HASH
COMMIT_SHA=$BRANCH_OR_SHA
BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
# is BRANCH_NAME
BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi
# This is the request we send to google so it knows what to build
# Here we're overriding some variables that we have already set in the default 'cloudbuild.yaml' file of the repo
cat <<EOF > request.json
{
  "projectId": "$PROJECT_ID",
  $BRANCH_OR_SHA,
  "substitutions": {
    "_MY_VAR_1": "my_value",
    "_MY_VAR_2": "my_value_2"
  }
}
EOF
# our curl post, we send 'request.json' with info, add our Token, and set the trigger_id
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
--format='value(credential.access_token)')" \
https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
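Hypothetical usage, assuming the script above is saved as run_trigger.sh and that PROJECT_ID is exported (the trigger ID here is a placeholder):
export PROJECT_ID=my-gcp-project
./run_trigger.sh 8f2cd1a0-0000-0000-0000-000000000000 main          # branch build
./run_trigger.sh 8f2cd1a0-0000-0000-0000-000000000000 1a2b3c4d5e6f  # commit build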
I am trying to get the current version of a component that is deployed to an environment in UCD, via the REST API / a curl command. Below is sample code which returns all versions of that component available in UCD; it does not give me the latest version deployed to the environment. Any help/suggestions?
curl -k -u userName:passw0rd \
-H "Accept: application/json" \
"https://myserver.example.com:8443/rest/deploy/environment/{environmentID}/versions/{componentID}"
uDeploy has a bunch of API endpoints that are undocumented. I could not figure out how to do this from their docs, but inspecting the calls the uDeploy web interface makes can often help you find the endpoint to hit.
https://{your-udeploy-url}/rest/deploy/environment/{your-environment-id}/latestDesiredInventory/true?rowsPerPage=10000&pageNumber=1&orderField=name&sortType=desc
This will return JSON that you can parse to get the versions deployed in an environment.
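A minimal sketch combining that endpoint with the credentials from the question (jq . just pretty-prints; the exact field names in the response may vary by UCD version):
curl -k -u userName:passw0rd \
  -H "Accept: application/json" \
  "https://myserver.example.com:8443/rest/deploy/environment/{environmentID}/latestDesiredInventory/true?rowsPerPage=10000&pageNumber=1&orderField=name&sortType=desc" | jq .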
I have tried this tutorial, but it didn't catch the OSSEC logs (alerts, syslog, etc.); it just gives me this message in my Kibana app:
Couldn't find any Elasticsearch data
You'll need to index some data into Elasticsearch before you can create an index pattern.
I know there are tutorials like this one, but they require the Wazuh package, and I don't want to use it; I just want to use pure OSSEC. My OSSEC and ELK apps are located on the same machine.
My question is: how can I integrate OSSEC with ELK? What configuration do I have to do before connecting OSSEC to ELK?
You need to load the data template so that Elasticsearch can understand the format of the alert data. You can use the one made by Wazuh, or you can download it and modify it to "make your own". If you go down this road you will eventually end up trying to re-write Wazuh, which you don't need to do because it is open source: you can just download all the source files and do whatever you want with them.
Command to load template:
curl https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -XPUT 'http://localhost:9200/_template/wazuh' -H 'Content-Type: application/json' -d @-
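To confirm the template was registered, you can read it back with Elasticsearch's standard template API (adjust host and port if yours differ):
curl 'http://localhost:9200/_template/wazuh?pretty'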
Download Template:
https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-alerts.json
-OR-
You could just spin up a Docker container that is ready to go:
https://github.com/wazuh/docker-ossec-elk