I'm using unoconv to convert XLSX => PDF. I need LibreOffice to wrap text and increase row heights dynamically depending on the contents of the XLSX file. Is there a way to do this programmatically, maybe at the unoconv/soffice level?
The only way that worked for us was by patching unoconv so that it immediately performs the following actions after opening a document:
Select All
Trigger optimal row height for the selection
From Python, it looks like this:
# Run right after the document has been opened (self.unosvcmgr, self.context
# and UnoProps are unoconv internals):
frame = document.CurrentController.Frame
dispatcher = self.unosvcmgr.createInstanceWithContext("com.sun.star.frame.DispatchHelper", self.context)
# Select every cell, then let Calc recompute the optimal row height for the selection.
dispatcher.executeDispatch(frame, ".uno:SelectAll", "", 0, ())
dispatcher.executeDispatch(frame, ".uno:SetOptimalRowHeight", "", 0, UnoProps(aExtraHeight=0))
The patch file is here: https://gist.github.com/ldiqual/065aada05cfb50443bc67fc3ae99ea14
And this is how it is applied in Docker:
ENV UNO_URL https://raw.githubusercontent.com/unoconv/unoconv/master/unoconv
COPY ./unoconv.patch /tmp/unoconv.patch
RUN curl -Ls ${UNO_URL} -o /usr/local/bin/unoconv \
&& patch /usr/local/bin/unoconv /tmp/unoconv.patch \
&& chmod +x /usr/local/bin/unoconv \
&& rm /tmp/unoconv.patch
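With the patched unoconv in place, the conversion itself is invoked as usual; a minimal example (the file name is a placeholder):
# Rows are auto-sized by the patch right after the document is opened.
unoconv -f pdf input.xlsx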
I tried to solve this problem here: https://github.com/PasaOpasen/_excel_correct_cells
I think the results are good, but it's not that easy.
I've built a Quarkus 2.7.1 console application using picocli that includes several subcommands. I'd like to be able to run this application within a Kubernetes cluster and decide its arguments at run-time. This is so that I can use the same container image to run the application in different modes within the cluster.
To get things started I added the JIB extension and tried setting the arguments using a configuration value quarkus.jib.jvm-arguments. Unfortunately it seems like this configuration value is locked at build-time so I'm unable to update this at run-time.
Next I tried setting quarkus.args while using default settings for JIB. The configuration value documentation makes it sound general enough for the job, but it doesn't seem to have an effect when the application is run in the container. Since most references to this configuration value in the documentation are in the context of Dev Mode, I'm wondering if it may be disabled outside of that.
How can I get this application running in a container image with its arguments decided at run-time?
You can set quarkus.jib.jvm-entrypoint to any container entrypoint command you want, including scripts. An example in the doc is quarkus.jib.jvm-entrypoint=/deployments/run-java.sh. You could make use of $CLI_ARGUMENTS in such a script. Even something like quarkus.jib.jvm-entrypoint=/bin/sh,-c,'/deployments/run-java.sh $CLI_ARGUMENTS' should work too, as long as you place the script run-java.sh at /deployments in the image. The possibilities are limitless.
Also see this SO answer if there's an issue. (The OP in the link put a custom script at src/main/jib/docker/run-java.sh (src/main/jib is Jib's default "extra files directory") so that Jib places the script in the image at /docker/run-java.sh.)
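For illustration, such a run script might look like the following minimal sketch (the jar location and the CLI_ARGUMENTS variable name are assumptions used in this thread, not something Quarkus provides out of the box):
#!/bin/sh
# Hypothetical /deployments/run-java.sh: forwards arguments supplied at run time
# via the CLI_ARGUMENTS environment variable to the Quarkus application.
# Adjust the jar path to wherever your image actually places the app.
exec java -jar /deployments/quarkus-run.jar $CLI_ARGUMENTS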
I was able to find a solution to the problem with a bit of experimenting this morning.
With the quarkus-container-image-docker extension (instead of quarkus.jib.jvm-arguments) I was able to take the template Dockerfile.jvm and extend it to pass arguments through to the CLI. The only line that needed changing was the ENTRYPOINT (details included in the snippet below). I changed the ENTRYPOINT from exec form to shell form and added an environment variable as an argument to pass program arguments through; a run-time usage example follows the Dockerfile.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
# [== BEFORE ==]
# ENTRYPOINT [ "/deployments/run-java.sh" ]
# [== AFTER ==]
ENTRYPOINT "/deployments/run-java.sh" $CLI_ARGUMENTS
I have tried the above approaches but they didn't work with the default Quarkus JIB base image, ubi8/openjdk-17-runtime. This is because this base image doesn't use /work as the WORKDIR, but /home/jboss instead.
Therefore, I created a custom start-up script and referenced it in the properties file as shown below. This approach works better if there's a need to set application parameters using environment variables (an example invocation follows the script):
File: application.properties
quarkus.jib.jvm-entrypoint=/bin/sh,run-java.sh
File: src/main/jib/home/jboss/run-java.sh
java \
-Djavax.net.ssl.trustStore=/deployments/truststore \
-Djavax.net.ssl.trustStorePassword="$TRUST_STORE_PASSWORD" \
-jar quarkus-run.jar
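The environment variable used by the script can then be supplied at run time, for example (image name and password value are placeholders):
# Hypothetical run: TRUST_STORE_PASSWORD is read by run-java.sh above.
docker run -e TRUST_STORE_PASSWORD="changeit" my-registry/my-app:latest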
I have uploaded thousands of files to Google Cloud Storage, and I found out that all the files are missing a content type, so my website cannot serve them correctly.
I wonder if I can set some kind of policy, such as changing the content type of all the files at once. For example, I have a bunch of .html files inside the bucket:
a/b/index.html
a/c/a.html
a/c/a/b.html
a/a.html
.
.
.
Is it possible to set the content type of all the .html files in the different locations with one command?
You could do:
gsutil -m setmeta -h Content-Type:text/html gs://your-bucket/**.html
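To spot-check the result, gsutil stat prints an object's stored metadata, including its Content-Type (the object path below is a placeholder):
# Inspect one of the rewritten objects:
gsutil stat gs://your-bucket/a/b/index.html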
There's no single command to achieve exactly the behavior you are looking for (one command to edit all the objects' metadata); however, there is a gsutil command (setmeta) to edit metadata which you could use in a bash script to loop through all the objects inside the bucket.
1.- Option (1) is to use the gsutil setmeta command in a bash script:
# Get the list of object names (optionally filter, e.g. gs://[BUCKET_NAME]/**.html)
# and iterate over the metadata edit command.
for OBJECT in $(gsutil ls gs://[BUCKET_NAME]/**)
do
  gsutil setmeta -h "[METADATA_KEY]:[METADATA_VALUE]" "$OBJECT"
done
2.- You could also write a small C++ program with the Cloud Storage client library to achieve the same thing:
namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string bucket_name,
   std::string key, std::string value) {
  // List all the objects in the bucket; inside the loop, edit each object's metadata.
  for (auto&& item : client.ListObjects(bucket_name)) {
    if (!item) continue;  // skip entries that failed to list
    std::string object_name = item->name();
    StatusOr<gcs::ObjectMetadata> object_metadata =
        client.GetObjectMetadata(bucket_name, object_name);
    if (!object_metadata) continue;
    gcs::ObjectMetadata desired = *object_metadata;
    desired.mutable_metadata().emplace(key, value);
    StatusOr<gcs::ObjectMetadata> updated =
        client.UpdateObject(bucket_name, object_name, desired,
                            gcs::Generation(object_metadata->generation()));
  }
};
I am new to Buildroot and am working to build Linaro with Buildroot. I have multiple kernel config fragment files and specified them in the Buildroot defconfig.
I have specified a custom kernel patches directory with BR2_LINUX_PATCH_DIR.
Some of the config flags that are supposed to be set in the resulting .config file are missing, so I suspect that the patches are not applied successfully. I tried giving a non-existent location as the Linux patch dir and it does not give any error.
Is there anything required other than giving a value to BR2_LINUX_PATCH_DIR, and what should the format of the directory structure be? The Buildroot manual says it should be
package_name/patch_name. For Linux, what should the package name be? Should it be the same name as the directory Linux is built in, which for me is linux-custom?
Please suggest and guide me on this.
Thanks in advance.
The option is named BR2_LINUX_KERNEL_PATCH; there is nothing named BR2_LINUX_PATCH_DIR. It applies all the patches listed in this option (if those are files), or all files named *.patch if what's given in this option is a directory. See the code in linux/linux.mk:
define LINUX_APPLY_LOCAL_PATCHES
	for p in $(filter-out ftp://% http://% https://%,$(LINUX_PATCHES)) ; do \
		if test -d $$p ; then \
			$(APPLY_PATCHES) $(@D) $$p \*.patch || exit 1 ; \
		else \
			$(APPLY_PATCHES) $(@D) `dirname $$p` `basename $$p` || exit 1; \
		fi \
	done
endef
Also, I would recommend that you watch the output of Buildroot: it shows everything it is doing, and in particular it lists the patches it applies. Look for the line >>> linux .... Patching, which marks the beginning of the patching step of the linux package.
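For reference, a minimal setup could look like the following sketch (the directory path and patch file names are made up for illustration):
# In the Buildroot configuration (e.g. your defconfig):
BR2_LINUX_KERNEL_PATCH="board/mycompany/patches/linux"
# Every *.patch file directly inside that directory gets applied:
#   board/mycompany/patches/linux/0001-fix-something.patch
#   board/mycompany/patches/linux/0002-enable-driver.patch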
I have an autotools-based BitBake recipe which I would like to have binaries installed in /usr/local/bin and libraries installed in /usr/local/lib (instead of /usr/bin and /usr/lib, which are the default target directories).
Here's a part of the autotools.bbclass file which I found important.
CONFIGUREOPTS = " --build=${BUILD_SYS} \
--host=${HOST_SYS} \
--target=${TARGET_SYS} \
--prefix=${prefix} \
--exec_prefix=${exec_prefix} \
--bindir=${bindir} \
--sbindir=${sbindir} \
--libexecdir=${libexecdir} \
--datadir=${datadir} \
--sysconfdir=${sysconfdir} \
--sharedstatedir=${sharedstatedir} \
--localstatedir=${localstatedir} \
--libdir=${libdir} \
...
I thought that the easiest way to accomplish what I wanted to do would be to simply change ${bindir} and ${libdir}, or perhaps change ${prefix} to /usr/local, but I haven't had any success in this area. Is there a way to change these installation variables, or am I thinking about this in the wrong way?
Update:
Strategy 1
As per Ross Burton's suggestion, I've tried adding the following to my recipe:
prefix="/usr/local"
exec_prefix="/usr/local"
but this causes the build to fail during that recipe's do_configure() task, and returns the following:
| checking for GLIB... no
| configure: error: Package requirements (glib-2.0 >= 2.12.3) were not met:
|
| No package 'glib-2.0' found
This package can be found during a normal build without these modified variables. I thought that adding the following line might allow the system to find the package metadata for glib:
PKG_CONFIG_PATH = " ${STAGING_DIR_HOST}/usr/lib/pkgconfig "
but this seems to have made no difference.
Strategy 2
I've also tried Ross Burton's other suggestion to add these variable assignments into my distribution's configuration file, but this causes it to fail during meta/recipes-extended/tzdata's do_install() task. It returns that DEFAULT_TIMEZONE is set to an invalid value. Here's the source of the error from tzdata_2015g.bb
# Install default timezone
if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
install -d ${D}${sysconfdir}
echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
else
bberror "DEFAULT_TIMEZONE is set to an invalid value."
exit 1
fi
I'm assuming that I've got a problem with ${datadir}, which references ${prefix}.
Do you want to change paths for everything or just one recipe? Not sure why you'd want to change just one recipe to /usr/local, but whatever.
If you want to change all of them, then the simple way is to set prefix in your local.conf or distro configuration (prefix = "/usr/local").
If you want to do it in a particular recipe, then just assigning prefix="/usr/local" and exec_prefix="/usr/local" in the recipe will work.
These variables are defined in meta/conf/bitbake.conf, where you can see that bindir is $exec_prefix/bin, which is probably why assigning prefix didn't work for you.
Your first strategy was on the right track, but by changing only prefix you were clobbering more than you wanted. If you look in sources/poky/meta/conf/bitbake.conf you'll find everything that gets clobbered when you set prefix to something other than /usr (as it was in my case). In order to modify only the install path (what would manually be the --prefix option to configure), I needed to set all of the following variables in that recipe:
prefix="/your/install/path/here"
datadir="/usr/share"
sharedstatedir="/usr/com"
exec_prefix="/usr"
I've got c++ code that needs a sed done to it prior to compilation. How do I place this into Makefile.am?
I tried the typical makefile setup and the target appears to not exist:
gentest.cc:
	$(SED) -i "s|FIND|REPLACE|" gentest.cc
If you are interested in why I want to do this: I wrote my program (slider3.py) in Python and my partner wrote his in C++ (gentest.cc), and his needs to call mine. I'm accomplishing this by editing argv and then using execv().
... {
    // Build a new argv whose first entry is the installed Python script.
    char **argv2 = new char *[argc];
    memset(argv2, 0, argc * sizeof(char *));  // zero the whole array, not just sizeof(a pointer)
    argv2[0] = (char *)"__PREFIX__/bin/slider3.py";
    memcpy(argv2 + 1, argv + 2, sizeof(char *) * (argc - 2));
    int oranges = execv(argv2[0], argv2);
    // execv only returns on failure; the reason is in errno, not the return value.
    printf("%s\n", strerror(errno));
    return oranges;
} ...
I've already handled getting the #! added to slider3.py and chmod +x by using the method that was not working for gentest.cc. I've also handled adding slider3.py to the list of files that get installed.
EXTRA_DIST=testite.sh slider3_base.py
bin_SCRIPTS = slider3.py
CLEANFILES = $(bin_SCRIPTS)
slider3.py: slider3_base.py
	rm -f slider3.py
	echo "#! " $(PYTHON) > slider3.py
	cat slider3_base.py >> slider3.py
	chmod +x slider3.py
gentest is defined this way in Makefile.am:
bin_PROGRAMS = gentest
gentest_SOURCES = gentest.cc
gentest_LDADD = libgen.a #../libsbsat.la $(LIBM)
And this fails to be run during make (note that the @prefix@ pattern is successfully expanded in the generated Makefile):
gentest.cc:
	$(SED) -i "s|__PREFIX__|@prefix@|" gentest.cc
Any ideas on how to get sed to run before compiling gentest.cc?
Don't use in-place sed.
Instead:
gentest_SOURCES = gentest-seded.cc
gentest-seded.cc : gentest.cc
	$(SED) "s|__PREFIX__|@prefix@|" $< >$@
Have you ever considered #define-ing it in config.h (you're using autotools, right?) or passing it with -D when compiling? This is really not a job for sed.
The details from Andrew Y's answer:
in your C++ source, specify:
argv2[0] = SCRIPTPREFIX "/bin/slider3.py";
then compile with
-DSCRIPTPREFIX='"/your/script/prefix"'
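For illustration, outside of automake this boils down to an ordinary preprocessor define on the compile line (a sketch; the prefix value is made up, and in an autotools build the flag would normally go into the compiler flags so that configure's prefix gets substituted in):
# Hypothetical manual compile showing the define in action:
g++ -DSCRIPTPREFIX='"/usr/local"' -o gentest gentest.cc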
Have you considered calling the Python code directly from the C++? Here is a tutorial on using boost to call python functions from C++. The method you are describing here seems very brittle.