buildbot: not building, instead shows a "?"

I'm trying to use buildbot for CI purposes. I have set up a buildmaster and a buildslave, and they are both connected. (I'm attaching my master.cfg below.)
I have the following problems:
a) I can see the changes committed on the Waterfall page, which means SVNPoller is working fine. However, none of the changes are getting built. I get a "?" on the buildbot page.
b) When I try to do a force build from http://localhost:8010/builders, I get an error in the logs:
[HTTPChannel,1,10.0.0.58] ..but not authorized
c = BuildmasterConfig = {}

from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]
c['slavePortnum'] = 9989

from buildbot.changes.svnpoller import SVNPoller
c['change_source'] = []
c['change_source'].append(SVNPoller(
    'file:///my/repo/path/trunk',
    pollinterval=300))

from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.changes import filter
c['schedulers'] = []
c['schedulers'].append(SingleBranchScheduler(
    name="all",
    change_filter=filter.ChangeFilter(branch='trunk'),
    treeStableTimer=None,
    builderNames=["runtests"]))
c['schedulers'].append(ForceScheduler(
    name="force",
    builderNames=["runtests"]))

from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand
from buildbot.steps import source, shell
from buildbot.process import factory

f = factory.BuildFactory()
f.addStep(source.SVN(svnurl="file:///my/repo/path/trunk/", mode="copy"))
f.addStep(shell.ShellCommand(command=["cmake", "."]))
f.addStep(shell.ShellCommand(command=["make", "all"]))

from buildbot.config import BuilderConfig
c['builders'] = []
c['builders'].append(
    BuilderConfig(name="runtests",
                  slavenames=["example-slave"],
                  factory=f))

c['status'] = []
from buildbot.status import html
from buildbot.status.web import authz, auth
authz_cfg = authz.Authz(
    auth=auth.BasicAuth([("userid", "password")]),
    gracefulShutdown = False,
    forceBuild = 'auth',  # use this to test your slave once it is set up
    forceAllBuilds = True,
    pingBuilder = False,
    stopBuild = False,
    stopAllBuilds = False,
    cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))

c['title'] = "My Project"
c['titleURL'] = "http://my/url"
c['buildbotURL'] = "http://localhost:8010/"

c['db'] = {
    'db_url' : "sqlite:///state.sqlite",
}

In your authz configuration, change the following value:
forceBuild = True
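That is, a sketch of the adjusted authz block (reusing the question's own config; in the WebStatus authz settings, 'auth' restricts an action to logged-in users, which is why the force request was rejected with "..but not authorized"):

authz_cfg = authz.Authz(
    auth=auth.BasicAuth([("userid", "password")]),
    gracefulShutdown = False,
    forceBuild = True,  # was 'auth'; True lets anyone use the force-build button
    forceAllBuilds = True,
    pingBuilder = False,
    stopBuild = False,
    stopAllBuilds = False,
    cancelPendingBuild = False,
)

For problem (a), it may also be worth checking (my speculation, not part of this answer) that the scheduler's ChangeFilter(branch='trunk') matches what SVNPoller actually reports: with the default split_file, changes arrive with branch=None, so a branch='trunk' filter would never fire.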

Related

FastAPI-Mail TEMPLATE_FOLDER does not exist

I was following this example for sending an email using FastAPI HTML templates, but it showed an error related to the templates directory.
conf = ConnectionConfig(
  File "pydantic/env_settings.py", line 38, in pydantic.env_settings.BaseSettings.__init__
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConnectionConfig
TEMPLATE_FOLDER
  file or directory at path "/Users/abushoeb/myproject/templates" does not exist (type=value_error.path.not_exists; path=/Users/abushoeb/myproject/templates)
The solution is almost identical to the example, with a minor modification to the TEMPLATE_FOLDER value.
The whole codebase is available at abushoeb/fastapi.
from fastapi import (
    FastAPI,
    BackgroundTasks,
    UploadFile, File,
    Form,
    Query,
    Body,
    Depends
)
from starlette.responses import JSONResponse
from starlette.requests import Request
from fastapi_mail import FastMail, MessageSchema, ConnectionConfig
from pydantic import EmailStr, BaseModel
from typing import List, Dict, Any
from fastapi_mail.email_utils import DefaultChecker
from pathlib import Path

class EmailSchema(BaseModel):
    email: List[EmailStr]
    body: Dict[str, Any]

    class Config:
        schema_extra = {
            "example": {
                "email": ["type recipient email address"],
                "subject": "FastAPI Templated Mail",
                "body": {"first_name": "recipient first name",
                         "last_name": "recipient last name"},
            }
        }

BASE_DIR = Path(__file__).resolve().parent

conf = ConnectionConfig(
    MAIL_USERNAME = "YourUsername",
    MAIL_PASSWORD = "strong_password",
    MAIL_FROM = "your@email.com",
    MAIL_PORT = 587,
    MAIL_SERVER = "your mail server",
    MAIL_FROM_NAME = "Desired Name",
    MAIL_TLS = True,
    MAIL_SSL = False,
    # USE_CREDENTIALS = True,
    # VALIDATE_CERTS = True,
    TEMPLATE_FOLDER = Path(BASE_DIR, 'templates')
)

app = FastAPI(title="Email - FastAPI", description="Sample Email Script")

@app.post("/email")
async def send_with_template(email: EmailSchema) -> JSONResponse:
    message = MessageSchema(
        subject="Fastapi-Mail with HTML Templates",
        recipients=email.dict().get("email"),  # list of recipients, as many as you can pass
        template_body=email.dict().get("body"),
    )
    fm = FastMail(conf)
    await fm.send_message(message, template_name="email_template.html")
    return JSONResponse(status_code=200, content={"message": "email has been sent"})
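For reference, a quick way to exercise the endpoint once the app is running (a hypothetical client call; it assumes the app above is served at http://127.0.0.1:8000 and that templates/email_template.html exists next to the script):

# hypothetical client-side check of the /email endpoint
import requests

resp = requests.post(
    "http://127.0.0.1:8000/email",
    json={"email": ["recipient@example.com"],
          "body": {"first_name": "Jane", "last_name": "Doe"}},
)
print(resp.status_code, resp.json())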

Get list of all notebooks in my databricks workspace

How do I get a list of all notebooks in my workspace and store their names along with the full path in a CSV file? I have tried the Databricks CLI option, but it doesn't seem to have a recursive operation.
databricks workspace list
As we can see in the code, there is no recursive option:
https://github.com/databricks/databricks-cli/blob/master/databricks_cli/workspace/cli.py (def ls_cli)
An example solution is to import the CLI's modules in Python and extend them:
from databricks_cli.sdk import ApiClient
from databricks_cli.sdk import service

host = "your_host"
token = "your_token"
client = ApiClient(host=host, token=token)

objects = []
workspace = service.WorkspaceService(client)

def list_workspace_objects(path):
    elements = workspace.list(path).get('objects')
    if elements is not None:
        for object in elements:
            objects.append(object)
            if object['object_type'] == 'DIRECTORY':
                list_workspace_objects(object['path'])

list_workspace_objects("/")
print(objects)
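Since the question asks for a CSV, here is a minimal follow-up sketch (assuming each returned object carries 'path' and 'object_type' keys, as the Workspace API list response does):

# write the collected notebook names and full paths to a CSV file
import csv
import os

with open("notebooks.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["name", "path"])
    for obj in objects:
        if obj.get("object_type") == "NOTEBOOK":
            # the notebook name is the last component of its workspace path
            writer.writerow([os.path.basename(obj["path"]), obj["path"]])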
You can use the code below directly. Note: this is tested code.
from pyspark.sql.types import IntegerType
from pyspark.sql.types import *
from pyspark.sql import Row
import base64
import requests
import json

databricks_instance = "databricks Instance"
url_list = f"{databricks_instance}/api/2.0/workspace/list"
url_export = f"{databricks_instance}/api/2.0/workspace/export"

payload = json.dumps({
    "path": "/"
})
headers = {
    'Authorization': 'Bearer token',
    'Content-Type': 'application/json'
}
response = requests.request("GET", url_list, headers=headers, data=payload).json()

notebooks = []

# Recursively collect all notebooks under the given listing.
def list_notebooks(mylist):
    for element in mylist['objects']:
        if element['object_type'] == 'NOTEBOOK':
            notebooks.append(element)
        if element['object_type'] == 'DIRECTORY':
            payload_inner = json.dumps({
                "path": element['path']
            })
            response_inner = requests.request("GET", url_list, headers=headers, data=payload_inner).json()
            if len(response_inner) != 0:
                list_notebooks(response_inner)
    return notebooks

result = list_notebooks(response)
print(result[0])

How to fix the field error issue in Django?

I'm trying to navigate from the list page to the detail page with the code below, but I get a FieldError. To fix it, I tried adding an empty slug field to the model, but then it shows a "page not found" error instead.
# urls.py
from django.urls import path
from .views import TaskListView, TaskDetailView

app_name = 'Tasks'
urlpatterns = [
    path('', TaskListView.as_view(), name='list'),
    path('<slug:slug>/', TaskDetailView.as_view(), name='detail'),
]
# views.py
from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
from django.views.generic import ListView, DetailView, View
from .models import Taskmanager

def home(request):
    return render(request, 'home.html')

class TaskListView(ListView):
    template_name = 'Tasks.html'
    model = Taskmanager
    context_object_name = 'data'

class TaskDetailView(DetailView):
    template_name = 'detail.html'
    model = Taskmanager
    context_object_name = 'data'
#models.py
from django.db import models
from django.urls import reverse
# Create your models here.
week_number = (("week01", "week01"),
("week02", "week02"),
("week03", "week03"),
("week04", "week04"),
("week05", "week05"),
("week06", "week06"),
("week07", "week07"),
("week08", "week08"),
("week09", "week09"),
("week10", "week10"),
("week11", "week11"),
("week12", "week12"),
("week13", "week13"),
("week14", "week14"),
("week15", "week15"),
("week16", "week16"),
("week17", "week17"),
("week18", "week18"),
("week19", "week19"),
("week20", "week20"),
("week21", "week21"),
("week22", "week22"),
("week23", "week23"),
("week24", "week24"),
("week25", "week25"),
("week26", "week26"),
("week27", "week27"),
("week28", "week28"),
("week29", "week29"),
("week30", "week30"),
("week31", "week31"),
("week32", "week32"),
("week33", "week33"),
("week34", "week34"),
("week35", "week35"),
("week36", "week36"),
("week37", "week37"),
("week38", "week38"),
("week39", "week39"),
("week40", "week40"),
("week41", "week41"),
("week42", "week42"),
("week43", "week43"),
("week44", "week44"),
("week45", "week45"),
("week46", "week46"),
("week47", "week47"),
("week48", "week48"),
("week49", "week49"),
("week50", "week50"),
("week51", "week51"),
("week52", "week52"),
("week53", "week53"),
)
class Taskmanager(models.Model):
    CurrentSprint = models.CharField(max_length=10, default="week01",
                                     choices=week_number)
    todaydate = models.DateField()
    taskname = models.SlugField(max_length=200)
    testrun = models.URLField(max_length=300)
    comments = models.CharField(max_length=300)
    assignedto = models.EmailField(max_length=70)

    def __str__(self):
        return self.taskname

    def get_absolute_url(self):
        return reverse('Tasks:detail', kwargs={'slug': self.taskname})
#Tasks.html
<a href="{% url 'Tasks:detail' slug='detail'%}"> {{Taskmanager.todaydate}}
</a>
When I click the link, it should navigate to the detail page, where the details of the task are displayed.
Try adding this:
# views.py
from django.shortcuts import get_object_or_404

class TaskDetailView(DetailView):
    ...
    def get_object(self):
        return get_object_or_404(Taskmanager, slug=self.kwargs['slug'])

# models.py
from django.db.models.signals import pre_save

class Taskmanager(models.Model):
    ...
    taskname = models.CharField(max_length=200)
    slug = models.SlugField(max_length=100)
    ...

def pre_save_Taskmanager_receiver(instance, *args, **kwargs):
    # fill the slug from taskname when it is left empty
    if not instance.slug:
        instance.slug = instance.taskname

pre_save.connect(pre_save_Taskmanager_receiver, sender=Taskmanager)

# task.html
{{ data.todaydate }}
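As an aside (my own addition, not part of the answer above): the list template should build each link from the object itself instead of hard-coding slug='detail', e.g.

<!-- Tasks.html: one link per task, using get_absolute_url -->
{% for task in data %}
    <a href="{{ task.get_absolute_url }}">{{ task.todaydate }}</a>
{% endfor %}

And if taskname can contain spaces or other characters not allowed in a URL, passing it through django.utils.text.slugify in the pre_save receiver keeps the slug valid.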

Buildbot configuration error

I have installed a buildbot master and slave, and I run the slave after starting the master. This is my master script for a build named simplebuild.
c = BuildmasterConfig = {}

c['status'] = []
from buildbot.status import html
from buildbot.status.web import authz, auth
authz_cfg = authz.Authz(
    auth=auth.BasicAuth([("slave1", "slave1")]),
    gracefulShutdown = False,
    forceBuild = 'auth',
    forceAllBuilds = False,
    pingBuilder = False,
    stopBuild = False,
    stopAllBuilds = False,
    cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))

from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand

qmake = ShellCommand(name = "qmake",
                     command = ["qmake"],
                     haltOnFailure = True,
                     description = "qmake")
makeclean = ShellCommand(name = "make clean",
                         command = ["make", "clean"],
                         haltOnFailure = True,
                         description = "make clean")
checkout = SVN(baseURL = "file:///home/aguerofire/buildbottestsetup/codeRepo/",
               mode = "update",
               username = "pawan",
               password = "pawan",
               haltOnFailure = True)
makeall = ShellCommand(name = "make all",
                       command = ["make", "all"],
                       haltOnFailure = True,
                       description = "make all")

f_simplebuild = BuildFactory()
f_simplebuild.addStep(checkout)
f_simplebuild.addStep(qmake)
f_simplebuild.addStep(makeclean)
f_simplebuild.addStep(makeall)

from buildbot.buildslave import BuildSlave
c['slaves'] = [
    BuildSlave('slave1', 'slave1'),
]
c['slavePortnum'] = 13333

from buildbot.config import BuilderConfig
c['builders'] = [
    BuilderConfig(name = "simplebuild", slavenames = ['slave1'], factory = f_simplebuild)
]

from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes import filter
trunkchanged = SingleBranchScheduler(name = "trunkchanged",
                                     change_filter = filter.ChangeFilter(branch = 'master'),
                                     treeStableTimer = 10,
                                     builderNames = ["simplebuild"])
c['schedulers'] = [ trunkchanged ]

from buildbot.changes.svnpoller import SVNPoller
svnpoller = SVNPoller(svnurl = "file:///home/aguerofire/buildbottestsetup/codeRepo/",
                      svnuser = "pawan",
                      svnpasswd = "pawan",
                      pollinterval = 20,
                      split_file = None)
c['change_source'] = svnpoller
After running this script, when I check the status of the build in the browser, I don't get any status for the build.
The detail inside the waterfall view is in the attached screenshot (not reproduced here).
My first question: is the actual build performed at the master's end or the slave's end?
Second, what could the problem be in the buildbot configuration? I deliberately made an error in a commit to find out whether it would show up in the waterfall display, but again there is no error, and the console view and waterfall view show the same screen as before.
Builds are run on a slave; the master just manages schedulers, builders, and slaves.
It seems that builds are not being run. As for your second screenshot, it shows change info but not build info. What does your "builders" tab show?
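One more thing worth checking (my own suggestion, not part of the answer above): with split_file = None, SVNPoller reports every change with branch None, so a ChangeFilter(branch = 'master') never matches and the scheduler never triggers a build. A sketch of the adjusted scheduler, assuming that diagnosis is right:

# hypothetical fix: match the branch the poller actually reports (None)
trunkchanged = SingleBranchScheduler(name = "trunkchanged",
                                     change_filter = filter.ChangeFilter(branch = None),
                                     treeStableTimer = 10,
                                     builderNames = ["simplebuild"])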

Flask: blueprints use a Celery task and get a circular import

I have an application with blueprints and Celery. The code is here:
config.py
import os
from celery.schedules import crontab

basedir = os.path.abspath(os.path.dirname(__file__))

class Config:
    SECRET_KEY = os.environ.get('SECRET_KEY') or ''
    SQLALCHEMY_COMMIT_ON_TEARDOWN = True
    RECORDS_PER_PAGE = 40
    SQLALCHEMY_DATABASE_URI = ''
    CELERY_BROKER_URL = ''
    CELERY_RESULT_BACKEND = ''
    CELERY_RESULT_DBURI = ''
    CELERY_TIMEZONE = 'Europe/Kiev'
    CELERY_ENABLE_UTC = False
    CELERYBEAT_SCHEDULE = {}

    @staticmethod
    def init_app(app):
        pass

class DevelopmentConfig(Config):
    DEBUG = True
    WTF_CSRF_ENABLED = True
    APP_HOME = ''
    SQLALCHEMY_DATABASE_URI = 'mysql+mysqldb://...'
    CELERY_BROKER_URL = 'sqla+mysql://...'
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = 'mysql://...'
    CELERY_TIMEZONE = 'Europe/Kiev'
    CELERY_ENABLE_UTC = False
    CELERYBEAT_SCHEDULE = {
        'send-email-every-morning': {
            'task': 'app.workers.tasks.send_email_task',
            'schedule': crontab(hour=6, minute=15),
        },
    }

class TestConfig(Config):
    DEBUG = True
    WTF_CSRF_ENABLED = False
    TESTING = True
    SQLALCHEMY_DATABASE_URI = 'mysql+mysqldb://...'

class ProdConfig(Config):
    DEBUG = False
    WTF_CSRF_ENABLED = True
    SQLALCHEMY_DATABASE_URI = 'mysql+mysqldb://...'
    CELERY_BROKER_URL = 'sqla+mysql://...celery'
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = 'mysql://.../celery'
    CELERY_TIMEZONE = 'Europe/Kiev'
    CELERY_ENABLE_UTC = False
    CELERYBEAT_SCHEDULE = {
        'send-email-every-morning': {
            'task': 'app.workers.tasks.send_email_task',
            'schedule': crontab(hour=6, minute=15),
        },
    }

config = {
    'development': DevelopmentConfig,
    'default': ProdConfig,
    'production': ProdConfig,
    'testing': TestConfig,
}

class AppConf:
    """
    Class to store the current config even out of context
    """
    def __init__(self):
        self.app = None
        self.config = {}

    def init_app(self, app):
        if hasattr(app, 'config'):
            self.app = app
            self.config = app.config.copy()
        else:
            raise TypeError
__init__.py:
import os
from flask import Flask
from celery import Celery
from config import config, AppConf
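# app_conf is presumably instantiated at module level (not shown in the question);
# tasks.py imports it below via "from app import make_celery, app_conf"
app_conf = AppConf()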
def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(config[config_name])
    config[config_name].init_app(app)
    app_conf.init_app(app)

    # Connect to Staging view
    from staging.views import staging as staging_blueprint
    app.register_blueprint(staging_blueprint)
    return app
def make_celery(app=None):
    app = app or create_app(os.getenv('FLASK_CONFIG') or 'default')
    celery = Celery(__name__, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery
tasks.py:
from app import make_celery, app_conf

cel = make_celery(app_conf.app)

@cel.task
def send_realm_to_fabricdb(realm, form):
    ...  # some actions
And here is the problem:
The blueprint "staging" uses the task send_realm_to_fabricdb, so it does from tasks import send_realm_to_fabricdb.
When I just run the application, everything goes OK.
BUT when I try to run Celery with celery -A app.tasks worker -l info --beat, it reaches cel = make_celery(app_conf.app) in tasks.py, gets app=None, and tries to create the application again, registering the blueprint... so I have a circular import here.
Could you tell me how to break this cycle?
Thanks in advance.
I don't have the code to try this out, but I think things would work better if you moved the creation of the Celery instance out of tasks.py and into the create_app function, so that it happens at the same time the app instance is created.
The argument you give to the Celery worker in the -A option does not need to include the tasks; Celery just needs the celery object. So, for example, you could create a separate starter script, say celery_worker.py, that calls create_app to create app and cel, and then give it to the worker as -A celery_worker.cel, without involving the blueprints at all.
Hope this helps.
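For illustration, a minimal sketch of such a starter script (my reading of this answer; the names create_app and make_celery are taken from the question's code):

# celery_worker.py -- hypothetical starter script for the worker
import os
from app import create_app, make_celery

app = create_app(os.getenv('FLASK_CONFIG') or 'default')
cel = make_celery(app)

The worker would then be started with celery -A celery_worker.cel worker -l info --beat, so tasks.py no longer has to build the app itself.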
What I do to solve this error is to create two Flask instances: one for the web app and another for initializing the Celery instance.
Like @Miguel said, I have
celery_app.py for the Celery instance
manager.py for the Flask instance
And in these two files, each module has its own Flask instance.
So I can use celery.task in views, and I can start the Celery worker separately.
Thanks to Bob Jordan; you can find the answer at https://stackoverflow.com/a/50665633/2794539.
Key points:
1. make_celery does two things at the same time: it creates the Celery app and runs Celery with the Flask context, so you can create two functions to do make_celery's job (see the sketch below).
2. The Celery app must be initialized before the blueprints are registered.
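A minimal sketch of that split (my reading of the linked answer; the function name init_celery is an assumption):

# hypothetical split: create the Celery instance once, bind it to the app later
from celery import Celery
from flask import Flask

celery = Celery(__name__)  # created at import time, before any blueprint is registered

def init_celery(celery, app):
    # bind an existing Celery instance to a Flask app's config and context
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask

def create_app(config_name='default'):
    app = Flask(__name__)
    init_celery(celery, app)  # point 2: bind Celery before registering blueprints
    # ... register blueprints here; their tasks can import the module-level `celery` ...
    return app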
Having the same problem, I ended up solving it very easily using shared_task (docs), keeping a single app.py file and not having to instantiate the flask app multiple times.
The original situation that led to the circular import:
from src.app import celery  # src.app is ALSO importing the blueprints, which import this file, which causes the circular import

@celery.task(bind=True)
def celery_test(self):
    sleep(5)
    logger.info("Task processed by Celery.")
The current code that works fine and avoids the circular import:
# from src.app import celery <- not needed anymore!
from celery import shared_task

@shared_task(bind=True)
def celery_test(self):
    sleep(5)
    logger.info("Task processed by Celery.")
Please mind that I'm pretty new to Celery, so I might be overlooking important stuff; it would be great if someone more experienced could give their opinion.