'function' object has no attribute 'agent' - chatbot

I am building a food-ordering chatbot, but while running online_train.py I encountered the error below, and because of that I'm not able to train my model.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import logging
from rasa_core import utils, train, run
from rasa_core.training import interactive
logger = logging.getLogger(__name__)
def train_agent():
    return train.train_dialogue_model(domain_file="./domain.yml",
                                      stories_file="./data/dialogue/stories.md",
                                      output_path="./models/dialogue/",
                                      policy_config="./policies.yml")
I expected the chatbot to start interacting with me as the training epochs ran, but instead I got this error: 'function' object has no attribute 'agent'.
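A likely cause (an assumption, since the rest of online_train.py isn't shown) is that the function object train_agent is handed to the interactive-learning call instead of the Agent it returns. A minimal sketch of what the bottom of the script might look like, noting that run_interactive_learning's exact keyword arguments vary across rasa_core versions:

if __name__ == '__main__':
    utils.configure_colored_logging(loglevel="INFO")
    # Call the function; passing train_agent itself (no parentheses) leaves a
    # plain function object where an Agent is expected, which produces
    # "'function' object has no attribute ..." errors.
    agent = train_agent()
    interactive.run_interactive_learning(agent, stories="./data/dialogue/stories.md")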

Related

Pyspark ModuleNotFound when importing custom package

Context: I'm running a script on Azure Databricks, and I'm using imports to bring in functions from a given file.
Let's say we have something like this in a file called "new_file"
from old_file import x
from pyspark.sql import SparkSession
from pyspark.context import SparkContext
from pyspark.sql.types import *
spark = SparkSession.builder.appName('workflow').config(
"spark.driver.memory", "32g").getOrCreate()
The imported function "x" takes as its argument a string that is read as a PySpark DataFrame, like so:
new_df_spark = spark.read.parquet(new_file)
new_df = ps.DataFrame(new_df_spark)  # ps: presumably "import pyspark.pandas as ps", not shown above
new_df is then passed as an argument to a function that calls the function x.
I then get an error like:
ModuleNotFoundError: No module named "old_file"
Does this mean I can't use imports? Or do I need to install old_file on the cluster for this to work? If so, how would that work, and will the package update if I change old_file again?
Thanks
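One approach worth trying (a sketch, assuming old_file.py sits somewhere the driver can see, such as a Repos checkout; the path below is a placeholder): make its directory importable before the import runs.

import sys

# Placeholder path: point this at the directory that actually contains old_file.py.
sys.path.append("/Workspace/Repos/<your-repo>/src")

from old_file import x  # should now resolve on the driver

If x is only called on the driver, this is usually enough. If it runs inside UDFs on the executors, packaging old_file as a wheel and installing it as a cluster library is the more robust route; note that an installed wheel will not pick up later edits to old_file until you rebuild and reinstall it.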

Having a problem with TensorFlow Tutorials - Image segmentation

I'm using a jupyter notebook.
I followed the tutorial and entered the code below.
pip install git+https://github.com/tensorflow/examples.git
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.models.pix2pix import pix2pix
from IPython.display import clear_output
import matplotlib.pyplot as plt
And I tried to "Download the Oxford-IIIT Pets dataset"
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
However, the console printed this:
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to ~\tensorflow_datasets\mnist\3.0.1...
and there was no data in the folder created.
Why isn't it working?
Tutorial link: https://www.tensorflow.org/tutorials/images/segmentation
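One thing worth trying (a sketch, assuming a stale or partial download is interfering; note the output above mentions an mnist path even though oxford_iiit_pet was requested): delete the half-created ~\tensorflow_datasets folder and point tfds at a fresh, explicitly writable directory.

import tensorflow_datasets as tfds

# Use an explicit, empty data_dir so leftovers in the default
# ~\tensorflow_datasets location can't interfere with the download.
dataset, info = tfds.load(
    'oxford_iiit_pet:3.*.*',
    with_info=True,
    data_dir='./tfds_data',
)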

Unable to call Notebook when using scala code in Databricks

I am in a situation where I can successfully run the snippet below in Azure Databricks in a separate cell.
%run ./HSCModule
But I run into issues when that line is included together with other Scala code that imports the packages below; I get the following error.
import java.io.{File, FileInputStream}
import java.text.SimpleDateFormat
import java.util{Calendar, Properties}
import org.apache.spark.SparkException
import org.apache.spark.sql.SparkSession
import scala.collection.JavaConverters._
import scala.util._
ERROR:
:168: error: ';' expected but '.' found.
%run ./HSCModule
FYI, I have also tried dbutils.notebook.run and am still facing the same issue.
You can't mix magic commands such as %run, %pip, etc. with Scala/Python code in the same cell. The documentation says:
%run must be in a cell by itself, because it runs the entire notebook inline.
So you need to put the magic command into a cell by itself. Note that dbutils.notebook.run is not a drop-in replacement: it executes the target notebook as a separate ephemeral job and returns only its exit value, so, unlike %run, it does not bring the definitions from HSCModule into the calling notebook's scope.
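If HSCModule only needs to run for its side effects (an assumption; if you need the functions it defines in the calling notebook's scope, %run in its own cell is the only option), the dbutils call looks like this, and it is the same in Python and Scala; the 60-second timeout is an arbitrary placeholder:

# Runs HSCModule as a separate ephemeral job and returns the value that
# notebook passes to dbutils.notebook.exit().
result = dbutils.notebook.run("./HSCModule", 60)
print(result)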

UnicodeDecodeError on Import Packages statement in Jupyter Notebook

I'm simply trying to import libraries, e.g.:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
and getting this "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 3200: invalid start byte" error.
I'm new to Jupyter Notebooks and wondering if I didn't set something up correctly. I'm attaching the full error message I'm getting.
Any advice would be GREATLY appreciated.
[screenshot of the full error message]
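A debugging sketch (assuming the traceback names the offending file, as UnicodeDecodeError tracebacks normally do): the last file named is the one Python failed to decode, often a config or startup file saved in a non-UTF-8 encoding rather than the libraries themselves. You can confirm the bad byte like this; the path is a placeholder for whatever file the traceback points at.

# Placeholder: replace with the file named at the bottom of the traceback.
path = "path/from/traceback"

with open(path, "rb") as f:
    data = f.read()

# The error reported byte 0xb8 at position 3200, so inspect that region.
print(data[3190:3210])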

Scala with filter column

I get the error
error: not found: value col
when I issue the command below in a Databricks notebook; I don't get the error when running it from a spark-shell:
altitiduDF.select("height", "terrain").filter(col("height") >= 11000...
I tried importing the following before my query, but it did not help:
import org.apache.spark.sql.Column
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.Row
Where can I find what I need to import to use the col function?
I found that I need to import
import org.apache.spark.sql.functions.col
to use the col function in Databricks. (That also explains the spark-shell difference: the spark-shell REPL auto-imports org.apache.spark.sql.functions._, so col is in scope there without an explicit import.)