How do I set up a postgres connection on airflow for the postgres operator - postgresql

For the rtfm crowd, let me document my suffering.
I went here:
https://betterdatascience.com/apache-airflow-postgres-database/
But my UI shows UNAUTHORIZED in pink after I add the info.
I also went here:
https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/connections/postgres.html
But obvious questions remain. Which file? What is the format of the default data? Why can't I just make a connection string and put it somewhere?
I also read this, which doesn't tell us where to put this information; it only tells us how to programmatically override it. It did give me this golden nugget:
Which would have been another Stack Overflow question.
Is there a file I should type my connection information or connection string into that has examples already?

I solved it. There's no secret file like airflow.cfg. It's hidden away in a database somewhere that I set up long ago, familiar to nobody who doesn't do this full time. To update or add connections, you either have to use the UI, which doesn't work for me, or you have to use the CLI and type airflow connections add --conn-uri followed by the string we all know and love (see the sketch at the end of this answer).
Since I wasn't born with the knowledge of all the commands available under the CLI, I googled here:
https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html
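To make this concrete, here is a rough sketch of the whole round trip, assuming Airflow 2.x with the apache-airflow-providers-postgres package installed. The connection id, host, credentials and table below are placeholders I made up, not anything Airflow ships with:

    # Register the connection once (CLI shown; the UI or an env var works too):
    #   airflow connections add 'my_postgres' \
    #       --conn-uri 'postgres://airflow_user:airflow_pass@db.example.com:5432/mydb'
    # or, equivalently, in the scheduler/worker environment:
    #   export AIRFLOW_CONN_MY_POSTGRES='postgres://airflow_user:airflow_pass@db.example.com:5432/mydb'
    # Then reference it from a DAG by its conn id:
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.postgres.operators.postgres import PostgresOperator

    with DAG(
        dag_id="example_postgres",
        start_date=datetime(2023, 1, 1),
        schedule_interval=None,
    ) as dag:
        create_table = PostgresOperator(
            task_id="create_table",
            postgres_conn_id="my_postgres",  # must match the conn id registered above
            sql="CREATE TABLE IF NOT EXISTS demo (id SERIAL PRIMARY KEY, note TEXT);",
        )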

Related

Trying to attach a GCS bucket to Datalore

(I asked this also on Datalore's forum. There doesn't seem to be much going on there -- so I'm asking here in the hope of a better/quicker response.)
I'm having trouble attaching GCS buckets. The documentation is sparse. All that I could find is here, which simply says:
In the New datasource dialog, fill in the fields and click Save and close.
Here's that dialog, and I'm not sure what information to put in it.
What I tried
GCS datasource name
I believe it's for reference within Datalore, correct? So can I just put anything here? I wrote "patant-data-ors".
bucket
Options I tried:
patent-data-ors (this is the name of the bucket)
storage.googleapis.com/patent-data-ors
patent-data-ors.storage.googleapis.com
Also tried 2 and 3 with https://.
key_file_content
I left it blank. I'm guessing it's for private buckets? Mine is publicly accessible.
What am I doing wrong?

Using a MongoDB connection string in a GitHub repo

This might be kind of a weird question, but I have a full-stack project that uses MongoDB for the database. I am about to put it in a local GitHub repository. Obviously the connection string contains a username & password which I would rather not make public. Does anyone know of a more secure way of doing this?
The whole purpose of this project is to add it to my portfolio, so future employers can see it and potentially try it out. Which means I want it to be as hassle-free as possible. I've never done this before, so I don't even know if someone who wants to use it would have to set up their own Mongo database just to get it to work properly, or if my database can be used by everybody who would potentially want to try it out.
I don't really know what I am doing here.
You need to set up environment files and add them to your .gitignore file.
Then use dotenv to read the values inside the file.
Article with a step-by-step guide: https://www.coderrocketfuel.com/article/store-mongodb-credentials-as-environment-variables-in-nodejs
You can use mongodb://localhost as the default connection string, committing this to the repository and using something like dotenv to override the connection string in your application at runtime.
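For illustration, here is a minimal sketch of that pattern, written in Python with python-dotenv and pymongo since the stack wasn't specified (the dotenv package for Node from the linked article works the same way). The variable name MONGODB_URI, the database name and the credentials are made up:

    # .env file (listed in .gitignore, never committed):
    #   MONGODB_URI=mongodb+srv://myuser:mypassword@cluster0.example.mongodb.net/portfolio
    import os

    from dotenv import load_dotenv   # pip install python-dotenv
    from pymongo import MongoClient  # pip install pymongo

    load_dotenv()  # loads .env into the process environment, if the file is present

    # Fall back to a local instance so a freshly cloned repo still runs without secrets.
    uri = os.getenv("MONGODB_URI", "mongodb://localhost:27017")
    client = MongoClient(uri)
    db = client["portfolio"]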

Setting up Dynamic Links in Firebase with Wordpress site

I am really struggling here... All I actually want to achieve is to get the Generate Strong Password feature inside my app, but that is actually harder than I thought.
I learned that I should go with Firebase Dynamic Links because I have a WordPress website from All-Inkl.com.
I followed this tutorial and there is actually an Apple site association file in place at the moment. But I can't access my website anymore, as it looks like this:
Inside my Firebase project I am getting this error, which says that not all the necessary "A-Files" are present for my website:
My DNS settings:
I've been struggling for weeks now to get this done so if anyone has any idea how I can fix it I would be extremely grateful!! (btw, I am a total newbie when it comes to websites; I know my way around Swift though)
It seems that different domain providers accept different values for DNS entries ('A records' = 'A-Datensätze', in this case).
Try editing the entries for the Host field (which currently hold your website's URL) to one of the 'common inputs' listed here: https://firebase.google.com/docs/hosting/custom-domain?hl=de#domain-key
As the URL to your site doesn't seem to be what your provider accepts, I would suggest you try replacing it with the next option, i.e. replacing it with #.
Hope this helps solve your issue!

How to get schema for OpenShift/K8s resources, e.g. how to get schema definition for DeploymentConfig or Pod

When I'm creating resources for OpenShift/K8s, I might be out of coverage area. I'd like to get the schema definitions while offline.
How can I get a schema for a kind from the command line? For example, I would like to get a generic schema for Deployment, DeploymentConfig, Pod or Secret.
Is there a way to get the schema without using Google? Ideally I could also get some documentation description for it.
Posting Graham Dumpleton's comment as a community wiki answer, based on the response from the OP saying that it solved their problem:
Have you tried running oc explain --recursive Deployment? You still need to be connected when you generate it, so you would need to save it to a file for later reference. Maybe also get down and read the free eBook at openshift.com/deploying-to-openshift which mentions this command and lots of other stuff as well. – Graham Dumpleton
Are you familiar with OpenAPI/Swagger? It is supported in OpenShift/Kubernetes. Read more here.
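If you want those schemas available offline, one rough sketch is to dump the oc explain output to files while you are still connected; the resource list and output directory below are arbitrary examples:

    # Save "oc explain --recursive" output per kind so it can be read without cluster access.
    import pathlib
    import subprocess

    resources = ["Deployment", "DeploymentConfig", "Pod", "Secret"]
    outdir = pathlib.Path("schemas")
    outdir.mkdir(exist_ok=True)

    for kind in resources:
        result = subprocess.run(
            ["oc", "explain", "--recursive", kind],
            capture_output=True, text=True, check=True,
        )
        (outdir / f"{kind}.txt").write_text(result.stdout)

    # The full OpenAPI/Swagger document can be dumped the same way, e.g. with
    # ["oc", "get", "--raw", "/openapi/v2"], if your client supports it.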

Best practice to handle logging via CloudWatch with a Spring Cloud Docker container in ECS [closed]

For several days now I have been trying to set up my Docker container running a Spring Cloud app so that it will
log on the EC2 instance with "docker logs"
show my logging entries in a log group in CloudWatch
My best result so far: in CloudWatch I got the first 20 lines of the log (the Spring banner in ASCII art and the Spring version) and that's it. At that time the container on the EC2 instance showed the same with "docker logs"; however, the same container on my machine logged as usual.
However, most of my attempts showed no logging at all, neither via "docker logs" nor via CloudWatch. But again, my app logging via Log4j2 with a console appender runs exactly as configured.
Even my local Docker container with the app.jar logs as expected (Windows 7, Docker Toolbox, no Linux possible, unfortunately). Only on the EC2 instance is there silence with "docker logs".
Configuration:
Do I have the right Docker image? openjdk:8-jdk-alpine
Are my Spring/AWS dependencies correct (we use mainly SQS): spring-cloud-aws-messaging, spring-cloud-aws-autoconfigure, spring-boot-starter-web, aws-java-sdk-sts
Do I need these logging dependencies that I use now? spring-boot-starter-log4j2 (to have all the bridges for Commons Logging)
Might the logging used in my app (Log4j2 via SLF4J) be the problem?
I tried the console appender => STDOUT => awslogs way. And I tried (additionally or instead) CloudWatch appenders (com.boxfuse.cloudwatchlogs:cloudwatchlogs-java-appender and pro.apphub:aws-cloudwatch-log4j2). With both appenders I fixed some initial configuration errors and then I saw... nothing in CloudWatch.
In the ECS task I tried the "awslogs" configuration to get Docker STDOUT into CloudWatch (that's what led to the above-mentioned first 20 lines of logging). And I tried the "json-file" config to see something on the EC2 instance with "docker logs". Neither led to the wished-for result.
Can you perhaps give me a hint of something I might have missed?
Why only the first 20 lines of the log (the Spring header)?
Why don't the appenders show the wished-for result? I hoped it would be as simple as Graylog... choose the right appender config in the log config and voilà, there are the log entries.
Do you have links to tutorials where logging from Spring Cloud to CloudWatch is the topic, with all necessary parts and steps explained?
Do you have some snippets (POM, task JSON, other hints) that might help me get this done?
Would it indeed be better to switch to an "everything ready" solution like Boxfuse?
Thank you a lot!
PS: I know there are solutions with the ELK stack and others, but I really would like to try CloudWatch first.
You can leave these fields of the task definition, they said. The system will choose an appropriate default, they said. Don't bother, they said.
It seems that I should have set a value for CPU in the task definition (see the sketch below)... The first couple of lines of log seem to be the only thing that the 0-CPU task is able to produce... no other error message.
I laugh crying...
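For completeness, a rough sketch of the relevant piece of the task definition, here registered with boto3. The family, image, log group, region and the concrete cpu/memory numbers are made-up examples; the point is the explicit CPU reservation and the awslogs log configuration (the log group must already exist):

    import boto3

    ecs = boto3.client("ecs", region_name="eu-central-1")

    ecs.register_task_definition(
        family="spring-cloud-app",
        containerDefinitions=[
            {
                "name": "spring-cloud-app",
                "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/spring-cloud-app:latest",
                "cpu": 256,       # an explicit, non-zero CPU reservation
                "memory": 512,
                "essential": True,
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "/ecs/spring-cloud-app",
                        "awslogs-region": "eu-central-1",
                        "awslogs-stream-prefix": "app",
                    },
                },
            }
        ],
    )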