Using Cortex for scraping metrics on Kubernetes

I'm running Cortex on Kubernetes in a dev environment to curate a dataset of metrics from multiple applications.
From what I'm reading, Cortex reuses a lot of Prometheus source code. Is there a way to configure Cortex to scrape metrics the way Prometheus does (based on annotations, maybe?) without having to run Prometheus instances?
This is just for some research, not for production

Take a look at the Grafana Cloud Agent. There is also a more lightweight scraper for Prometheus metrics: vmagent.
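For example, vmagent can scrape pods based on the usual prometheus.io/scrape annotations and remote-write the samples into Cortex. A minimal sketch (the Cortex service address and push path are assumptions about your setup):

```yaml
# scrape.yml - Prometheus-compatible scrape config consumed by vmagent
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"

# then run vmagent pointing at Cortex's remote-write endpoint (assumed URL):
#   vmagent -promscrape.config=scrape.yml \
#           -remoteWrite.url=http://cortex:9009/api/prom/push
```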

According to the maintainers, there's no way to do this in Cortex itself.

Related

Are there available tools to manage kubernetes configuration?

Given a container in an Azure container registry and a Kubernetes cluster set up via the portal, are there any visual tools I can use so that I don't have to use command-line commands for things like adding/editing the YAML file and launching the cluster?
For example, I found this tool, https://k8syaml.com/, but it covers only one part of the process and is not aware of the existing infrastructure.
What are the visual tools to manage Kubernetes end-to-end?
One tool I always work with when dealing with Kubernetes is Lens. Here is a video showing what it can do. Best of all, it only needs the kubeconfig file, so it is agnostic to where the Kubernetes cluster runs (on-prem, GKE, AKS, EKS).
kubectx is handy for switching between contexts (clusters), and K9s is a widely used hybrid between a CLI and a visual tool.
Octant is another option (https://github.com/vmware-tanzu/octant); it is similar to Lens.

How to get data in Anthos Metrics for Kubernetes clusters

We have one project with two clusters inside it. We would like to monitor and set alert policies for plenty of parameters like kube_pod_status_phase, kube_pod_container_status_restarts_total, etc. We can see all these parameters in Metrics Explorer (with the prefix kubernetes.io/anthos/..), but they don't show any data. Can anyone guide us on whether other configuration is missing to use Anthos Metrics, or provide a guide or steps for using them?
Note: We have Istio configured in both clusters and we are using Workload Identity feature as well.
Any help would be highly appreciated.
Thank you.
I don't think you want to use these metrics.
Anthos, Anthos GKE, and GKE are three different Google products.
GKE:
is an enterprise-grade platform for containerized applications, including stateful and stateless, AI and ML, Linux and Windows, complex and simple web apps, API, and backend services. Leverage industry-first features like four-way auto-scaling and no-stress management. Optimize GPU and TPU provisioning, use integrated developer tools, and get multi-cluster support from SREs.
Anthos
is an open hybrid and multi-cloud application platform that enables you to modernize your existing applications, build new ones, and run them anywhere in a secure manner. Built on open source technologies pioneered by Google—including Kubernetes, Istio, and Knative—Anthos enables consistency between on-premises and cloud environments and helps accelerate application development.
Anthos GKE
is part of Anthos, lets you take advantage of Kubernetes and cloud technology in your data center and in the cloud. You get Google Kubernetes Engine (GKE) experience with quick, managed, and simple installs as well as upgrades validated by Google. And Google Cloud Console gives you a single pane of glass view for managing your clusters across on-premises and cloud environments.
If you check the Anthos GKE pricing information, you can read that:
Anthos is available as a monthly, term-based subscription service. Anthos subscription is required to use Anthos GKE. For pricing please contact sales.
So to get Anthos metrics you would need to use Anthos GKE, which requires an Anthos subscription. This can incur additional costs; for details you would probably need to contact sales.
For monitoring purposes, you should check the options described here and choose what fits you best.
However, the most common approaches are Prometheus on GKE and Stackdriver.
In addition, you can find many how-to guides on the web about monitoring on GKE, like this tutorial.
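If you go the Prometheus route, the metrics named in the question can be alerted on directly. A hedged sketch of a rule file (the thresholds, durations, and labels are assumptions; kube_pod_container_status_restarts_total and kube_pod_status_phase come from kube-state-metrics):

```yaml
groups:
  - name: pod-health
    rules:
      - alert: PodRestartingTooOften
        # fires if a container restarted more than 3 times in the last hour
        expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
      - alert: PodNotRunning
        # fires if a pod is stuck outside the Running/Succeeded phases
        expr: kube_pod_status_phase{phase=~"Pending|Failed|Unknown"} == 1
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} stuck in phase {{ $labels.phase }}"
```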

Best practices for developing an app locally that will later be deployed to Kubernetes

Let's say I have a Flask app, a PostgreSQL database, and a Redis instance. What is the best-practice way to develop those apps locally and later deploy them to Kubernetes?
I have tried developing in minikube with ksync, but I had difficulty getting detailed debug log information.
Any ideas?
What we do with our systems is develop and test them locally. I am not very knowledgeable about Flask and ksync, but, for example, if you are using the Lagom microservices framework in Java, you run your app locally in the SBT shell, where you can view all your logs. We then automate the deployment using Lightbend Orchestration.
When you then decide to test the app on Kubernetes, you can choose to use minikube, but you have to configure logging properly. You can configure centralised logging for Kubernetes using the EFK stack, which collects the logs from the various components of your app and stores them in Elasticsearch. You can then view these logs in the Kibana dashboard, which lets you do a lot: view logs for a given period, or search logs by Kubernetes namespace or by container.
There are multiple solutions for this (aka GitOps with Kubernetes):
Skaffold
Draft
Flux - IMO the most mature.
Ksonnet
GitKube
Argo - A bit more of a workflow engine.
Metaparticle - Deploy with actual code.
I think the solution is to use Skaffold.
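As a minimal sketch of what that looks like for the Flask app (the image name and manifest path are assumptions about your repo layout):

```yaml
# skaffold.yaml - rebuild and redeploy on every source change
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-flask-app   # assumed image name
      context: .
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml          # assumed location of the Flask/Postgres/Redis manifests
```

Running `skaffold dev` then rebuilds the image on file changes, redeploys to the current cluster (e.g. minikube), and streams the container logs to your terminal, which also helps with the debug-log problem from the question.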

Why does Flink use Yarn?

I am taking a deep look inside Flink to see how I can use it on a project and had a question for the creators / high-level thinkers... why does Flink use YARN as the default resource manager?
Was Kubernetes considered? Or is it one of those things where it started on YARN and works pretty well?
I have come across many projects and articles that allow Kubernetes and YARN to work together, including the Myriad project that allows YARN to be deployed on Mesos (but I am on Kubernetes...).
I have a very large compute cluster, 2000 or so nodes, and I want to use the super cool CEP features of Flink fed off a Kafka infrastructure (also deployed in this Kubernetes environment).
I am looking to understand the reasons behind using YARN as the resource manager underneath Flink, and whether it would be possible (with some effort and contribution to the project) to make Kubernetes an option alongside YARN.
Please note: I am new to YARN - just reading up on it. I am also new to Flink and learning about its deployment and scale-out architecture.
Flink is not tied to YARN. It can also run on Apache Mesos, and there are users running it on Kubernetes. In the current version (Flink 1.4.1), there are a few things to consider when running Flink on Kubernetes (see this talk by Patrick Lucas).
The Flink community is also currently working on improving Flink's support for container setups. The effort is called FLIP-6 and will be included in the next release (Flink 1.5.0).
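As a rough sketch of what a standalone Flink session cluster on Kubernetes looks like (the names and image tag are assumptions; the official Flink Docker image accepts a `jobmanager`/`taskmanager` argument), a minimal JobManager Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
        - name: jobmanager
          image: flink:1.4.1        # assumed tag matching the version in the answer
          args: ["jobmanager"]
          ports:
            - containerPort: 6123   # RPC
            - containerPort: 8081   # web UI
          env:
            - name: JOB_MANAGER_RPC_ADDRESS
              value: flink-jobmanager
```

TaskManagers would be a similar Deployment with `args: ["taskmanager"]`, plus a Service named `flink-jobmanager` so the TaskManagers can find the JobManager.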

How to setup kube-aggregator?

I'm using Kubernetes 1.6.4 and want to enable custom metrics for autoscaling, but that requires kube-aggregator. I did not find any docs on how to set it up and integrate it with the API server. Any help?
Thanks
You need Kubernetes 1.7+ to use the API aggregation feature. The GitHub project below has all the required files to set up a metrics server and custom metrics using the Prometheus adapter:
https://github.com/pawankkamboj/k8s-custom-metrics
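For reference, the aggregation layer works by registering an APIService object that tells the kube-apiserver to proxy a metrics API group to your adapter's Service. A hedged sketch (the Service name and namespace are assumptions; `apiregistration.k8s.io/v1beta1` matches 1.7-era clusters):

```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  # assumed name/namespace of the Prometheus adapter Service
  service:
    name: custom-metrics-apiserver
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true   # acceptable for testing; use caBundle in production
  groupPriorityMinimum: 100
  versionPriority: 100
```

Once this is in place, the Horizontal Pod Autoscaler can query `custom.metrics.k8s.io` for the metrics the adapter exposes.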