Problem:
I'm trying to write a webpack plugin to integrate a source-code generator into my webpack build. My complete scenario is complex, so I've broken it down into a simpler progression of questions.
Part I:
I have a code generator that generates a %.js file from a %.proto file. For example, with source files foo.proto and bar.proto, I want my plugin to produce the following compilation steps:
            ┌─────────┐
foo.proto ──┤ codegen ├──> foo.js
            └─────────┘
            ┌─────────┐
bar.proto ──┤ codegen ├──> bar.js
            └─────────┘
Where am I meant to register this dependency on each %.proto file (for file watching) and declare the produced assets (%.js) on the compilation object?
This scenario could be achieved with a loader by using require('codegen!foo.proto'), but by Part III you'll see why loaders won't be appropriate.
My intention would be expressed in make as:
%.js: %.proto
	codegen $^ $@
Part II:
The %.js files emitted by my generator are now in ES6 syntax, so they need to be transpiled to ES5. I already have babel-loader configured for transpilation of ES6 source, if that's helpful. Continuing the example, the steps would be:
            ┌─────────┐   ┌───────┐
foo.proto ──┤ codegen ├───┤ babel ├──> foo.js
            └─────────┘   └───────┘
            ┌─────────┐   ┌───────┐
bar.proto ──┤ codegen ├───┤ babel ├──> bar.js
            └─────────┘   └───────┘
i.e., I want:
%.js: %.proto
	codegen $^ | babel -o $@
Should I:
be doing the transpilation within my plugin task, hiding it from the webpack compilation?
be getting webpack to do the transpilation via creating additional tasks on the compilation object?
be emitting the generated js in a manner that will allow webpack to transform it through the appropriate loader pipeline it's already using for other source?
Part III:
My generator now takes an additional input file of %.fragment.js. How can I express this dependency on the webpack compilation, such that file watching will rebuild the assets when either %.proto or %.fragment.js is changed? This multi-source dependency is why I don't think loaders are an appropriate direction to head in.
                    ┌─────────┐   ┌───────┐
foo.proto ──────────┤ codegen ├───┤ babel ├──> foo.js
foo.fragment.js ────┤         │   │       │
                    └─────────┘   └───────┘
                    ┌─────────┐   ┌───────┐
bar.proto ──────────┤ codegen ├───┤ babel ├──> bar.js
bar.fragment.js ────┤         │   │       │
                    └─────────┘   └───────┘
My intention is:
%.js: %.proto %.fragment.js
	codegen $^ | babel -o $@
In this post, I saw a mention of "child compilations". Is there any webpack documentation of what those are or how they're intended to be used?
Or, is this kind of scenario not what webpack is intended to support, even via custom plugins?
Your problem can be solved with loaders. I recommend reading the loader-writing guidelines before starting.
The first of them, by priority, is that a loader should do only a single task. So your loader for .proto files will just generate an ES6 .js file.
Q: Where am I meant to register this dependency on each %.proto file (for file watching) and declare the produced assets (%.js) on the compilation object?
A: You should require your proto files in the usual way (as you described):
require("foo.proto");
and produce additional assets with the emitFile function:
emitFile(name: string, content: Buffer|String, sourceMap: {...})
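For concreteness, a minimal proto-loader along those lines might look like the sketch below; generateJsFromProto is a hypothetical stand-in for your real code generator, not part of any API:
// proto-loader.js -- a minimal sketch only.
var path = require("path");

// Hypothetical placeholder so the sketch is self-contained.
function generateJsFromProto(protoSource) {
    return "// generated from .proto\nexport default " + JSON.stringify(protoSource) + ";";
}

module.exports = function(source) {
    this.cacheable();
    var generated = generateJsFromProto(source);
    // Declare the produced %.js asset on the compilation:
    this.emitFile(path.basename(this.resourcePath, ".proto") + ".js", generated);
    // Returning the generated source also hands it on through the
    // rest of the loader pipeline (see the babel-loader chain below).
    return generated;
};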
Q: Should I be emitting the generated js in a manner that will allow webpack to transform it through the appropriate loader pipeline it's already using for other source?
A: Yep, your loader must do only a single task: generate an ES6 js file from the proto file. The resulting file will then be passed to babel:
{test: /\.proto$/, loader: 'babel-loader!proto-loader'}
Q: My generator now takes an additional input file of %.fragment.js. How can I express this dependency on the webpack compilation, such that file watching will rebuild the assets when either %.proto or %.fragment.js is changed?
A: You must mark the dependencies with the addDependency function (example from the docs):
// Loader adding a header
var fs = require("fs");
var path = require("path");
module.exports = function(source) {
    this.cacheable();
    var callback = this.async();
    var headerPath = path.resolve("header.js");
    // Registering header.js makes the watcher rebuild when it changes.
    this.addDependency(headerPath);
    fs.readFile(headerPath, "utf-8", function(err, header) {
        if (err) return callback(err);
        callback(null, header + "\n" + source);
    });
};
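Applied to the Part III scenario, the same mechanism lets a proto-loader watch the sibling %.fragment.js file; again a sketch, with generateJsFromProto standing in for your real generator:
// proto-loader.js, Part III variant -- a sketch only.
var fs = require("fs");

// Hypothetical placeholder so the sketch is self-contained.
function generateJsFromProto(protoSource, fragment) {
    return fragment + "\n// generated from .proto\n";
}

module.exports = function(source) {
    this.cacheable();
    var callback = this.async();
    // foo.proto -> foo.fragment.js, located next to the .proto file.
    var fragmentPath = this.resourcePath.replace(/\.proto$/, ".fragment.js");
    // Registering the fragment means watching rebuilds on either input.
    this.addDependency(fragmentPath);
    fs.readFile(fragmentPath, "utf-8", function(err, fragment) {
        if (err) return callback(err);
        callback(null, generateJsFromProto(source, fragment));
    });
};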
Related
Problem statement
When building a Python package I want the build tool to automatically execute the steps to generate the necessary Python files and include them in the package.
Here are some details about the project:
the project repository contains only the hand-written Python and YAML files
to have a fully functional package the YAML files must be compiled into Python scripts
once the Python files are generated from YAMLs, the program needed to compile them is no longer necessary (build dependency).
the hand-written and generated Python files are then packaged together.
The package would then be uploaded to PyPI.
I want to achieve the following:
When the user installs the package from PyPI, all necessary files required for the package to function are included and it is not necessary to perform any compile steps
When the user checks-out the repository and builds the package with python -m build . --wheel, the YAML files are automatically compiled into Python and included in the package. Compiler is required.
When the user checks-out the repository and installs the package from source, the YAML files are automatically compiled into Python and installed. Compiler is required.
(nice to have) When the user checks-out the repository and installs in editable mode, the YAML files are compiled into Python. The user is free to make modifications to both generated and hand-written Python files. Compiler is required.
I have a repository with the following layout:
├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── specs
│           └── file.ksc (YAML file)
└── pyproject.toml
And the functional package should look something like this
├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── generated
│           └── file.py
├── pyproject.toml
└── <other package metadata>
How can I achieve those goals?
What I have so far
As I am very fresh to Python packaging, I have been struggling to understand the relations between the pyproject.toml, setup.cfg and setup.py and how I can use them to achieve the goals I have outlined above. So far I have a pyproject.toml with the following content:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "<package>"
version = "xyz"
description = "<description>"
authors = [ <authors> ]
dependencies = [
    "kaitaistruct",
]
From reading the setuptools documentation, I understand that there are build commands, such as:
build_py -- simply copies Python files into the package (no compiling; works differently in editable mode)
build_ext -- builds C/C++ modules (not relevant here?)
I suppose adding the compile steps for the YAML files will involve writing a setup.py file and overriding a command, but I don't know if this is the right approach, whether it will even work, or if there are better methods, such as using a different build backend.
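For what it's worth, a minimal sketch of that override idea is below; the placeholder paths and the kaitai-struct-compiler invocation are my assumptions based on the layout above, not a tested recipe:
# setup.py -- a sketch of overriding build_py to run the code generator.
import subprocess
from pathlib import Path

from setuptools import setup
from setuptools.command.build_py import build_py


class build_py_with_codegen(build_py):
    """Compile the YAML specs to Python before the normal copy step."""

    def run(self):
        out_dir = Path("<project>/<project>/generated")  # assumed layout
        out_dir.mkdir(parents=True, exist_ok=True)
        for spec in Path("<project>/<project>/specs").glob("*.ksc"):
            subprocess.check_call(
                ["kaitai-struct-compiler", "-t", "python",
                 "--outdir", str(out_dir), str(spec)])
        super().run()


setup(cmdclass={"build_py": build_py_with_codegen})
Since wheel and sdist builds go through build_py, this would cover the non-editable goals; whether editable installs trigger it depends on the setuptools version and is worth verifying.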
Alternative approaches
A possible alternative approach would be to manually compile the YAML files prior to starting the installation or build of the package.
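Sketched as shell steps, and assuming the Kaitai Struct compiler to match the .ksc specs and the kaitaistruct dependency above:
$ kaitai-struct-compiler -t python --outdir <project>/<project>/generated \
      <project>/<project>/specs/file.ksc
$ python -m build . --wheel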
My flutter project depends on several local flutter and dart packages to keep things separated and clean.
My folder structure is like this:
main-flutter-project
├── lib
├── test
├── pubspec.yaml
│
└── local-packages
    ├── dart-package-1
    │   └── pubspec.yaml
    │
    ├── flutter-package-1
    │   └── pubspec.yaml
    │
    └── flutter-package-2
        └── pubspec.yaml
...
Each local package is self contained and can be maintained without touching the main project.
This structure means that I have many pubspec.yaml files where I have to keep the dependencies updated.
When I use e.g. the bloc library bloc: ^7.2.1 in, say, 5 packages, I have to update the version in each pubspec file separately when a new version is released.
Is there a possibility to specify those shared package dependency versions in only one place where the other pubspec.yaml files refer to?
I've seen this e.g. with Maven where you can specify a property <junit.version>4.12</junit.version> and access it from somewhere else <version>${junit.version}</version>.
We were solving a similar problem.
AFAIK, there's no built-in or recommended way to do this, so we invented some hacks.
In our case, we have core package that has some shared functionality and common dependencies, if you don't have it, you can still create an artificial one, let's say, shared_dependencies package, and specify all the shared dependencies there.
Now, let's say, package foo depends on the shared_dependencies package, and there's a dependency bar defined in the shared_dependencies package that foo needs to use. There are several ways to do that:
1. Import the dependency directly. Since foo depends transitively on bar, you can just write import 'package:bar/bar.dart'; and it will work. It's not the best way, though:
- it's bad practice to import a transitive dependency (there's even a linter rule for that);
- auto-import won't work.
2. Export the package from the shared_dependencies package. I.e., shared_dependencies.dart can contain the following line:
export 'package:bar/bar.dart';
That means that in your foo package you can just write import 'package:shared_dependencies/shared_dependencies.dart'; and get access to bar's content.
Pros:
auto-imports work.
Cons:
- if you export several packages, there can be name conflicts (you'll have to hide some names in the exports);
- if the foo package depends on bar only, it can feel odd to import all of shared_dependencies.
3. Export in separate libraries of the shared_dependencies package. You can group related packages together in different files, e.g.:
bar.dart:
export 'package:bar/bar.dart';
bloc.dart:
export 'package:bloc_concurrency/bloc_concurrency.dart';
export 'package:flutter_bloc/flutter_bloc.dart';
In that case, if you need bar package in foo, you write import 'package:shared_dependencies/bar.dart'; if you need bloc, you write import 'package:shared_dependencies/bloc.dart'. Auto-imports work as well.
4. Add a direct dependency to the foo package, but don't specify version constraints:
bar:
This basically means that you need any version of the bar package; but since foo also depends on shared_dependencies, its constraints will be taken into account. This may be needed if you're using executables from the bar package, as there's a limitation in the Dart SDK that doesn't allow running executables from transitive dependencies.
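Put together, foo's pubspec.yaml might look like this sketch (names, paths and versions are illustrative):
# foo/pubspec.yaml -- a sketch; names, paths and versions are illustrative.
name: foo
environment:
  sdk: ">=2.17.0 <3.0.0"
dependencies:
  shared_dependencies:
    path: ../shared_dependencies   # the one place versions are pinned
  bar:                             # option 4: no constraint here; resolved
                                   # via shared_dependencies' constraint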
In our project, we ended up using option 2 for the most commonly used packages, option 3 for other packages, and option 4 for packages with executables that we need to run.
To use Flow in VSCode you are supposed to install the Flow Language Support extension and disable the built-in JS/TS validation, either by adding
"javascript.validate.enable": false
to your project's VSCode settings.json file, or by disabling the JS/TS features completely.
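So the Flow root's folder settings file would contain just this (placed as in Root B of the example project below):
// <flow-root>/.vscode/settings.json
{
    "javascript.validate.enable": false
}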
I have a multi-root workspace with different project roots that use plain JS, TypeScript, or Flow, so I can't disable the JS/TS features completely. But disabling JS validation via the folder's .vscode settings gives me this error/message:
This setting cannot be applied in this workspace. It will be applied when you open the containing workspace folder directly.
And it does not work: none of the Flow features are working, and VSCode complains about things like this:
- 'import type' declarations can only be used in TypeScript files.
- Type aliases can only be used in TypeScript files.
- Type annotations can only be used in TypeScript files.
- ...
How can I get FlowJS + VSCode working when I'm using a multi-root workspace?
Example project:
Project
├── Root A (plain old JS)
│   └── .vscode
│       └── settings.json
│
├── Root B (FlowJS)
│   ├── .vscode
│   │   └── settings.json   // "javascript.validate.enable": false
│   └── test.js             // error: Type annotations can only be used in TypeScript files. ts(8010)
│
├── Root C (TypeScript)
│   └── .vscode
│       └── settings.json
│
└── example.code-workspace
Visual Studio Code says:
A Cargo.toml file must be at the root of the workspace in order to
support all features
However, I did not find what should be in the Cargo.toml file located at the workspace root. Is it common to all project subdirectories?
I have read the chapter Hello, Cargo! of the documentation, but it only speaks about the Cargo.toml files within the project directory.
By experimenting, I have found out that a file with only the line [workspace] makes the VS Code note go away, but now every time I set up a new project it nags me that the project is not mentioned in the members array within this "workspace" Cargo.toml.
The Visual Studio Code directory structure is as follows:
workspace
├── project1
└── project2
cargo new project3 generates a Cargo.toml within the newly created project3 directory, but Visual Studio Code expects another Cargo.toml within the workspace directory itself.
This is covered in chapter 14 of the book, section 3. The Cargo.toml at the root of a Cargo workspace should explicitly contain its member projects in the members property. Note that this is exactly what the IDE was advising you to do.
[workspace]
members = [
    "project1",
    "project2",
]
Quoting:
Next, in the add directory, we create the Cargo.toml file that will configure the entire workspace. This file won’t have a [package] section or the metadata we’ve seen in other Cargo.toml files. Instead, it will start with a [workspace] section that will allow us to add members to the workspace by specifying the path to our binary crate; in this case, that path is adder:
Filename: Cargo.toml
[workspace]
members = [
    "adder",
]
Next, we’ll create the adder binary crate by running cargo new within the add directory:
$ cargo new --bin adder
Created binary (application) adder project
At this point, we can build the workspace by running cargo build. The files in your add directory should look like this:
├── Cargo.lock
├── Cargo.toml
├── adder
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── target
Another example of this in the wild is serde (Cargo.toml).
The Cargo documentation provides additional details on the members field, including that path dependencies are included automatically.
The root crate of a workspace, indicated by the presence of [workspace] in its manifest, is responsible for defining the entire workspace. All path dependencies residing in the workspace directory become members. You can add additional packages to the workspace by listing them in the members key. Note that members of the workspaces listed explicitly will also have their path dependencies included in the workspace. [...]
In this case, neither path dependencies nor members were stated in the root Cargo project, so the subdirectories were not regarded as workspace members.
As a workaround, I was able to create a top-level Cargo.toml with the following content:
[workspace]
members = [
    "./*/src/..",
]
With this, I can create new projects under the workspace without explicitly updating the top-level Cargo.toml.
As a note, the more obvious globs like "*", "./*" and "*/" do not work because the resulting matches must be directories with Cargo.toml files, and these globs match more than that (including "./target/", for example). The path I came up with results in the right subset (at least in the basic, typical case).
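An arguably simpler variant, assuming every other top-level directory is a crate, is to glob everything and explicitly exclude the non-crate directories; the exclude key is documented alongside members in the Cargo workspace reference:
[workspace]
members = ["*"]
exclude = ["target"]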
I have never used CMake before, so please excuse me.
My project has a "unity" folder that contains version 2.3.0 of the Unity unit test library. unity_fixture.h contains "#define TEST(...", which is used like the following:
#include "unity_fixture.h"
...
TEST(xformatc, Simple) {
char output[20];
TEST_ASSERT_EQUAL(13, testFormat(output, "Hello world\r\n"));
TEST_ASSERT_EQUAL_STRING("Hello world\r\n", output);
}
I added "include_directories(${CMAKE_SOURCE_DIR}/unity)" to my CMakeLists.txt file. Still CLion does not find the declaration of TEST and I get tons of errors.
I did try to add all the unity files with set(SOURCE_FILES unity/unity_fixture.h ...), but this did not work either.
Edit 08.09.2015:
I found out something strange. If I call cmake from command line it creates a file "DependInfo.cmake" with the following contents:
# The set of languages for which implicit dependencies are needed:
set(CMAKE_DEPENDS_LANGUAGES
  )
# The set of files for implicit dependencies of each language:
# Targets to which this target links.
set(CMAKE_TARGET_LINKED_INFO_FILES
  )
# The include file search paths:
set(CMAKE_C_TARGET_INCLUDE_PATH
  "unity"
  "cmsis/inc"
  "freertos/inc"
  )
set(CMAKE_CXX_TARGET_INCLUDE_PATH ${CMAKE_C_TARGET_INCLUDE_PATH})
set(CMAKE_Fortran_TARGET_INCLUDE_PATH ${CMAKE_C_TARGET_INCLUDE_PATH})
set(CMAKE_ASM_TARGET_INCLUDE_PATH ${CMAKE_C_TARGET_INCLUDE_PATH})
The CMAKE_C_TARGET_INCLUDE_PATH entries are missing from the file that is created by CLion. I believe that is the reason why it does not find the headers. The question is: how do I tell CLion to generate the CMAKE_C_TARGET_INCLUDE_PATH entries?
I assume that your project structure is:
project root
├── CMakeLists.txt
├── Some source files
└── unity
    └── unity_fixture.h
If you use CMake to add the include directory:
set(INCLUDE_DIR ./unity)
include_directories(${INCLUDE_DIR})
Your include must then be: #include <unity_fixture.h>
Or, without using CMake to add include directories, you can write: #include "unity/unity_fixture.h"
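For completeness, a minimal CMakeLists.txt along those lines might look like this sketch; the target and source file names are illustrative, and Unity's unity.c and unity_fixture.c are compiled into the test binary so that TEST(...) links:
cmake_minimum_required(VERSION 3.5)
project(xformatc C)

# Make unity_fixture.h visible to the compiler and to CLion's indexer.
include_directories(${CMAKE_SOURCE_DIR}/unity)

# Compile the Unity sources together with the tests.
add_executable(xformatc_tests
    xformatc.c          # illustrative file names
    xformatcTest.c
    unity/unity.c
    unity/unity_fixture.c)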