Can't import directory in Filecoin Lotus node

I am running a lotus client, but when I run:
lotus client import <path/to/directory>
I get:
ERROR: failed to import file using unixfs: failed to import file to store to compute root: read /home/patrick_alphachain_io/code/unstoppable-ui/unstoppable-ui-static-export: is a directory
Can we not upload directories like we can with IPFS? Is there a way to just upload an IPFS CID?

You can wrap your folder into a .car file and upload that to Filecoin.
You could use a tool like ipfs-car.
yarn ipfs-car --pack /path/to/directory/ --output my-output.car
And then import that into Lotus:
lotus client import --car my-output.car
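Once the import succeeds, lotus prints the data CID of the .car. As a sketch of the next step (the miner ID, price, and duration below are placeholders, not values from the question), you can then propose a storage deal for that CID:
lotus client deal <data-CID> <miner-ID> <price> <duration>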

Related

How to solve "ERROR: Import of Jasper report server export zip failed!"

I am getting the following error when trying to run the js-import script as the root user.
INFO: Jasper report server install successful!
PORTAL_HOME =
cp: cannot stat ‘/jasper/install/distributions/jasperReports/lib/*’: No such file or directory
Using CATALINA_BASE: /opt/jasper/tomcat/apache-tomcat-9.0.56
Using CATALINA_HOME: /opt/jasper/tomcat/apache-tomcat-9.0.56
Using CATALINA_TMPDIR: /opt/jasper/tomcat/apache-tomcat-9.0.56/temp
Using JRE_HOME: /usr/java/jdk1.8.0_312
Using CLASSPATH: /opt/jasper/tomcat/apache-tomcat-9.0.56/bin/bootstrap.jar:/opt/jasper/tomcat/apache-tomcat-9.0.56/bin/tomcat-juli.jar
Using CATALINA_OPTS:
Using CATALINA_PID: /opt/jasper/tomcat/apache-tomcat-9.0.56/catalina_pid.txt
Existing PID file found during start.
Removing/clearing stale PID file.
Tomcat started.
Waiting on Tomcat to start - Retry 1/5 in 60s
INFO: Tomcat started Successfully!
Running the JS import script [/jasper/install/scripts/runJSTestInstall.sh.tmp].
ERROR: Import of Jasper report server export zip failed!
ERROR: Main install failed while executing child script [installJasperReports]!
INFO: Check the [installJasperReports.log] file for more details.
INFO: Fix the issue and then run [uninstallJasperReports.sh /jasper/install] followed by [installAll.sh /jasper/install installJasperReports] to resume.
I get the following log in the js-import log file, relevant to the above error:
VALIDATION COMPLETED
Total time: 4 seconds
Executing CE version
First resource path: /opt/jasper/jasperReports/jasperreports-server-cp-8.0.0-bin/buildomatic/conf_source/ieCe
Loading configuration resources
Initialization complete
Processing started
Tenant not found with Tenant ID "organizations"
Please give a solution for this!

How to reconcile the Terraform State with an existing bucket?

Using Terraform 0.11.14.
My terraform file contains the following resource:
resource "google_storage_bucket" "assets-bucket" {
name = "${local.assets_bucket_name}"
storage_class = "MULTI_REGIONAL"
force_destroy = true
}
This bucket has already been created (it exists in the infrastructure from a previous apply).
However, the state (stored remotely on GCS) is inconsistent and doesn't seem to include this bucket.
As a result, terraform apply fails with the following error:
google_storage_bucket.assets-bucket: googleapi: Error 409: You already own this bucket. Please select another name., conflict
How can I reconcile the state? (terraform refresh doesn't help)
EDIT
Following ydaetskcoR's answer, I did:
terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
The output:
module.bf-nathan.google_storage_bucket.assets-bucket: Importing from ID "my-bucket"...
module.bf-nathan.google_storage_bucket.assets-bucket: Import complete! Imported google_storage_bucket (ID: next-assets-bf-nathan-botfront-cloud)
module.bf-nathan.google_storage_bucket.assets-bucket: Refreshing state... (ID: next-assets-bf-nathan-botfront-cloud)
Error: module.bf-nathan.provider.kubernetes: 1:11: unknown variable accessed: var.cluster_ip in:
https://${var.cluster_ip}
The refresh step doesn't work. I ran the command from the project's root, where a terraform.tfvars file exists.
I tried adding -var-file=terraform.tfvars, but no luck. Any idea?
You need to import it into the existing state file. You can do this with the terraform import command for any resource that supports it.
Thankfully the google_storage_bucket resource does support it:
Storage buckets can be imported using the name or project/name. If the project is not passed to the import command it will be inferred from the provider block or environment variables. If it cannot be inferred it will be queried from the Compute API (this will fail if the API is not enabled).
e.g.
$ terraform import google_storage_bucket.image-store image-store-bucket
$ terraform import google_storage_bucket.image-store tf-test-project/image-store-bucket
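Since the bucket in the question lives inside a module, the resource address also needs the module prefix, exactly as in the edit above:
$ terraform import module.bf-nathan.google_storage_bucket.assets-bucket next-assets-bf-nathan-botfront-cloud
As for the var.cluster_ip error during the refresh step: terraform import also evaluates every provider block in the configuration, and on Terraform 0.11 a provider that interpolates a variable it cannot resolve will fail like this; a common workaround (untested against this particular setup) is to temporarily hard-code the kubernetes provider's configuration while running the import.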

Pyspark not working and throwing java exception: Java gateway process exited before sending its port number

I started working with PySpark; I have installed it and am running it in a Jupyter notebook. The problem I am facing is the gateway process failure below. I have even tried setting $JAVA_HOME, but it didn't work. I want to know the reason behind this and how to fix it.
Error in the Jupyter notebook:
Exception: Java gateway process exited before sending its port number
.bashrc file:
export PYTHONPATH=/usr/lib/python3.6
export SPARK_HOME='/home/junaid/spark-2.4.0-bin-hadoop2.7'
export PATH=$SPARK_HOME:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export HADOOP_HOME=$HOME/hadoop-2.7.3
export HADOOP_CONF_DIR=$HOME/hadoop-2.7.3/etc/hadoop
export HADOOP_MAPRED_HOME=$HOME/hadoop-2.7.3
export HADOOP_COMMON_HOME=$HOME/hadoop-2.7.3
export HADOOP_HDFS_HOME=$HOME/hadoop-2.7.3
export YARN_HOME=$HOME/hadoop-2.7.3
export PATH=$PATH:$HOME/hadoop-2.7.3/bin
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
export PATH="$PATH:/opt/mssql-tools/bin"
Notebook code:
from pyspark import SparkContext
sc = SparkContext("local")
I have even tried with
sc = SparkContext("local", "count app")
but it didn't work.
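One quick sanity check, since JAVA_HOME exported in .bashrc is not always visible to a Jupyter kernel, is to print what the kernel actually sees (a diagnostic sketch, not a confirmed fix for this error):
import os, subprocess
# Show the JAVA_HOME the notebook kernel inherited, if any
print(os.environ.get('JAVA_HOME'))
# java -version writes to stderr; works on Python 3.6+
print(subprocess.run(['java', '-version'], stderr=subprocess.PIPE).stderr.decode())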

How can I run NLTK on App Engine or Kubernetes?

I am busy writing a model to predict types of text, like names or dates, in a PDF document.
The model uses nltk.word_tokenize and nltk.pos_tag.
When I try to use this on Kubernetes on Google Cloud Platform, I get the following error:
from nltk.tag import pos_tag
from nltk.tokenize import word_tokenize
tokenized_word = word_tokenize('x')
tagged_word = pos_tag(['x'])
Stack trace:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/env/nltk_data'
- '/env/share/nltk_data'
- '/env/lib/nltk_data'
- ''
But obviously, downloading it to your local machine will not solve the problem when the code has to run on Kubernetes, and we do not have NFS set up on the project yet.
I ended up solving this problem by adding the download of the NLTK packages to an init function:
import logging

import nltk
from nltk import word_tokenize, pos_tag

LOGGER = logging.getLogger(__name__)
LOGGER.info('Catching broad nltk errors')

DOWNLOAD_DIR = '/usr/lib/nltk_data'
LOGGER.info(f'Saving files to {DOWNLOAD_DIR}')

try:
    tokenized = word_tokenize('x')
    LOGGER.info(f'Tokenized word: {tokenized}')
except Exception as err:
    LOGGER.info(f'NLTK dependencies not downloaded: {err}')
    try:
        nltk.download('punkt', download_dir=DOWNLOAD_DIR)
    except Exception as e:
        LOGGER.info(f'Error occurred while downloading file: {e}')

try:
    tagged_word = pos_tag(['x'])
    LOGGER.info(f'Tagged word: {tagged_word}')
except Exception as err:
    LOGGER.info(f'NLTK dependencies not downloaded: {err}')
    try:
        nltk.download('averaged_perceptron_tagger', download_dir=DOWNLOAD_DIR)
    except Exception as e:
        LOGGER.info(f'Error occurred while downloading file: {e}')
I realize that this many try/except blocks are not needed. I also specify the download directory because it seemed that, if you do not, NLTK downloads and unzips 'tagger' to /usr/lib, and NLTK does not look for the files there.
This downloads the files on the first run in each new pod, and the files persist until the pod dies.
The error was solved on a stateless Kubernetes deployment, which means this approach can also handle non-persistent platforms like App Engine, but it will not be the most efficient, because the data has to be downloaded every time a new instance spins up.
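An alternative worth noting (an assumption on my part, not something tested as part of this answer) is to bake the NLTK data into the container image at build time, so no pod ever has to download it, e.g. with a Dockerfile step like:
RUN python -m nltk.downloader -d /usr/local/share/nltk_data punkt averaged_perceptron_tagger
/usr/local/share/nltk_data is one of the directories in NLTK's default search path (it appears in the error output above), so no download_dir handling is needed at runtime.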

AWS Elastic Beanstalk: deploy Akka application

I have a simple Akka HTTP server:
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.http.scaladsl.server.HttpApp
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.settings.ServerSettings
import com.typesafe.config.ConfigFactory

object MinimalHttpServer extends HttpApp {
  def route =
    pathPrefix("v1") {
      path("subscribe" / Segment) { id =>
        get {
          complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, s"<h1>Hello $id from Akka Http!</h1>"))
        } ~
        post {
          entity(as[String]) { entity =>
            complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, s"<b>Thanks $id for posting your message <i>$entity</i></b>"))
          }
        }
      }
    }
}

object MinimalHttpServerApplication extends App {
  MinimalHttpServer.startServer("localhost", 8088, ServerSettings(ConfigFactory.load))
}
I use sbt-native-packager to build a universal zip. When I deploy my application to AWS Elastic Beanstalk, I receive this error:
[Instance: i-0a846978718d54d76] Command failed on instance. Return code: 1 Output: (TRUNCATED)...xml_2.11-1.0.5.jar Unable to launch application as the source bundle does not contain either a file named application.jar or a Procfile. Unable to launch application as the source bundle does not contain either a file named application.jar or a Procfile. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/01_configure_application.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Any ideas? Thank you!
It appears AWS Elastic Beanstalk expects your .zip to contain either a file named application.jar or a Procfile, and the zip created by sbt-native-packager does not look like that.
sbt-native-packager likewise has no built-in support for the format Elastic Beanstalk expects, though GitHub issue 632 shows some work done in that direction.
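One possible workaround (a sketch only; the start-script name below is a guess derived from the object name, not something taken from the question): sbt-native-packager includes any files placed under src/universal/ in the resulting zip, so you could add a Procfile there telling Beanstalk how to launch the app:
web: ./bin/minimal-http-server-application
Note that Elastic Beanstalk's Java SE platform proxies requests to port 5000 by default, and the server above binds to localhost:8088, so the app would also need to bind to 0.0.0.0 and the expected port for the proxy to reach it.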