Is there a standard way to determine what features are available for a given crate?
I'm trying to read Postgres timezones, and this says to use the postgres = "0.17.0-alpha.1" crate's with-time or with-chrono features.
When I try this in my Cargo.toml:
[dependencies]
postgres = { version = "0.17.0-alpha.1", features = ["with-time"] }
I get this error:
error: failed to select a version for `postgres`.
... required by package `mypackage v0.1.0 (/Users/me/repos/mypackage)`
versions that meet the requirements `^0.17.0-alpha.1` are: 0.17.0, 0.17.0-alpha.2, 0.17.0-alpha.1
the package `mypackage` depends on `postgres`, with features: `with-time` but `postgres` does not have these features.
Furthermore, the crate page for postgres 0.17.0 says nothing about these features, so I don't even know if they're supposed to be supported or not.
It seems like there would be something on docs.rs about it?
Crates uploaded to crates.io (and thus to docs.rs) will show what feature flags exist. crates.io issue #465 suggests placing the feature list on the crate's page as well.
Beyond that, the only guaranteed way to see what features are available is to look at the Cargo.toml for the crate. This generally means that you need to navigate to the project's repository, find the correct file for the version you are interested in, and read it.
You are primarily looking for the [features] section, but also for any dependency marked as optional = true, since each optional dependency counts as an implicit feature flag.
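For illustration, a hypothetical crate's Cargo.toml might declare both kinds (the names here are made up):

[dependencies]
# An optional dependency implicitly creates a feature flag with the same name.
serde = { version = "1", optional = true }

[features]
# Explicitly declared features; each one can enable optional dependencies or other features.
default = []
extras = ["serde"]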
Unfortunately, there's no requirement that the crate author document what each feature flag does. Good crates will document their feature flags in one or more locations:
As comments in Cargo.toml
Their README
Their documentation
See also:
How do you enable a Rust "crate feature"?
For the postgres crate, we can start at crates.io, then click "Repository" to go to the source repository. We then find the right tag (postgres-v0.17.0) and read the Cargo.toml:
[features]
with-bit-vec-0_6 = ["tokio-postgres/with-bit-vec-0_6"]
with-chrono-0_4 = ["tokio-postgres/with-chrono-0_4"]
with-eui48-0_4 = ["tokio-postgres/with-eui48-0_4"]
with-geo-types-0_4 = ["tokio-postgres/with-geo-types-0_4"]
with-serde_json-1 = ["tokio-postgres/with-serde_json-1"]
with-uuid-0_8 = ["tokio-postgres/with-uuid-0_8"]
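With the real feature names in hand, the original dependency can be corrected. Note that this version has no with-time feature at all, so to get date/time support you would pick one that does exist, e.g. the chrono one:

[dependencies]
postgres = { version = "0.17.0", features = ["with-chrono-0_4"] }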
You can use the cargo-feature crate to view and manage your dependencies' features:
> cargo install cargo-feature --locked
> cargo feature postgres
Available features for `postgres`
array-impls = ["tokio-postgres/array-impls"]
with-bit-vec-0_6 = ["tokio-postgres/with-bit-vec-0_6"]
with-chrono-0_4 = ["tokio-postgres/with-chrono-0_4"]
with-eui48-0_4 = ["tokio-postgres/with-eui48-0_4"]
with-eui48-1 = ["tokio-postgres/with-eui48-1"]
with-geo-types-0_6 = ["tokio-postgres/with-geo-types-0_6"]
with-geo-types-0_7 = ["tokio-postgres/with-geo-types-0_7"]
with-serde_json-1 = ["tokio-postgres/with-serde_json-1"]
with-smol_str-01 = ["tokio-postgres/with-smol_str-01"]
with-time-0_2 = ["tokio-postgres/with-time-0_2"]
with-time-0_3 = ["tokio-postgres/with-time-0_3"]
with-uuid-0_8 = ["tokio-postgres/with-uuid-0_8"]
with-uuid-1 = ["tokio-postgres/with-uuid-1"]
Note: this only works if the crate is already in Cargo.toml as a dependency.
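The tool can also modify a dependency's features; the `+feature` syntax below is taken from cargo-feature's documentation at the time of writing and may differ between versions:

> cargo feature postgres +with-chrono-0_4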
The list of features is also displayed when using cargo add:
> cargo add postgres
Updating crates.io index
Adding postgres v0.19.4 to dependencies.
Features:
- array-impls
- with-bit-vec-0_6
- with-chrono-0_4
- with-eui48-0_4
- with-eui48-1
- with-geo-types-0_6
- with-geo-types-0_7
- with-serde_json-1
- with-smol_str-01
- with-time-0_2
- with-time-0_3
- with-uuid-0_8
- with-uuid-1
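cargo add can also enable features while adding the dependency; for example, using one of the feature names listed above:

> cargo add postgres --features with-chrono-0_4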
It seems like there would be something on docs.rs about it?
Other people thought so too! This was added to docs.rs in late 2020. You can view the list of features from the crate's documentation by navigating to "Feature flags" in the top bar.
You still won't see the list of features for postgres 0.17.0, though, since old documentation isn't regenerated when new functionality is added to the site, but any recently published version will have it available.
Note: this is only available on docs.rs and not generated when running cargo doc.
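At the time of writing, the feature list also has a stable URL of the form https://docs.rs/crate/<name>/<version>/features, e.g.:

https://docs.rs/crate/postgres/latest/features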
The feature-set of a dependency can be retrieved programmatically via the cargo-metadata crate:
use cargo_metadata::MetadataCommand;

fn main() {
    // Run `cargo metadata` on the current project and parse the output.
    let metadata = MetadataCommand::new()
        .exec()
        .expect("failed to get metadata");

    // Look up the dependency in the resolved package list.
    let dependency = metadata
        .packages
        .iter()
        .find(|package| package.name == "postgres")
        .expect("failed to find dependency");

    // `features` maps each feature name to what it enables.
    println!("{:#?}", dependency.features);
}
{
"with-time-0_3": [
"tokio-postgres/with-time-0_3",
],
"with-smol_str-01": [
"tokio-postgres/with-smol_str-01",
],
"with-chrono-0_4": [
"tokio-postgres/with-chrono-0_4",
],
"with-uuid-0_8": [
"tokio-postgres/with-uuid-0_8",
],
"with-uuid-1": [
"tokio-postgres/with-uuid-1",
],
"array-impls": [
"tokio-postgres/array-impls",
],
"with-eui48-1": [
"tokio-postgres/with-eui48-1",
],
"with-bit-vec-0_6": [
"tokio-postgres/with-bit-vec-0_6",
],
"with-geo-types-0_6": [
"tokio-postgres/with-geo-types-0_6",
],
"with-geo-types-0_7": [
"tokio-postgres/with-geo-types-0_7",
],
"with-eui48-0_4": [
"tokio-postgres/with-eui48-0_4",
],
"with-serde_json-1": [
"tokio-postgres/with-serde_json-1",
],
"with-time-0_2": [
"tokio-postgres/with-time-0_2",
],
}
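To compile the sketch above, cargo-metadata must itself be a dependency of the inspecting project (the version below is illustrative):

[dependencies]
cargo_metadata = "0.18"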
This is how other tools like cargo add and cargo feature provide dependency and feature information.
Related
I'm looking into integrating a validation framework into an existing PySpark project. There are a lot of examples of how to configure Great Expectations using JSON/YAML files in the official documentation. However, in my case table schemas are defined as Python classes, and I'm aiming to keep the validation definitions in these classes.
When playing around, I noticed this kind of pattern can be used to validate single expectations without any config files:
from pyspark.sql import Row, SparkSession
from great_expectations.core.expectation_validation_result import ExpectationValidationResult
from great_expectations.dataset import SparkDFDataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([
    Row(x=1, y="foo"),
    Row(x=2, y=None),
])
ds = SparkDFDataset(df)
expectation: ExpectationValidationResult = ds.expect_column_values_to_not_be_null("y")
print(expectation.success)
where expectation.success is either False or True. However, I'm aiming to build expectation suites and generate reports using programmatic configuration, but I can't find any references on how to do it. This is what I tried to hack together, but it leads to a runtime exception:
ds.append_expectation(ExpectationConfiguration(
    expectation_type="expect_column_values_to_not_be_null",
    kwargs={'column': 'y', 'result_format': 'BASIC'},
))
engine = SparkDFExecutionEngine(
    force_reuse_spark_context=True,
)
validator = Validator(
    execution_engine=engine,
    expectation_suite=ds.get_expectation_suite(),
)
res = validator.validate()
Any pointers on how to configure Great Expectations without config files (or minimal files) are highly appreciated!
Batches are required for validating expectation suites. Here is a working example:
import uuid

from pyspark.sql import Row, SparkSession
from great_expectations.core.batch import Batch, BatchDefinition
from great_expectations.core.expectation_configuration import ExpectationConfiguration
from great_expectations.core.expectation_suite import ExpectationSuite
from great_expectations.core.id_dict import IDDict
from great_expectations.execution_engine import SparkDFExecutionEngine
from great_expectations.validator.validator import Validator

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([
    Row(x=1, y="foo"),
    Row(x=2, y=None),
])
engine = SparkDFExecutionEngine(
    force_reuse_spark_context=True,
)
validator = Validator(
    execution_engine=engine,
    expectation_suite=ExpectationSuite(
        expectation_suite_name="my_suite",
        expectations=[
            ExpectationConfiguration(
                expectation_type="expect_column_values_to_not_be_null",
                kwargs={"column": "y", "result_format": "BASIC"},
            ),
            ExpectationConfiguration(
                expectation_type="expect_column_values_to_not_be_null",
                kwargs={"column": "x", "result_format": "BASIC"},
            ),
        ],
    ),
    batches=[
        Batch(
            data=df,
            batch_definition=BatchDefinition(
                datasource_name="foo",
                data_connector_name="foo",
                data_asset_name="foo",
                batch_identifiers=IDDict(ge_batch_id=str(uuid.uuid1())),
            ),
        ),
    ],
)
res = validator.validate()
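The returned suite-level result can then be inspected programmatically; a minimal sketch, assuming the example above has already run:

print(res.success)  # overall result of the whole suite
for result in res.results:
    # one ExpectationValidationResult per configured expectation
    print(result.expectation_config.expectation_type, result.success)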
I'm working on switching from OpenAPI v2 to OpenAPI v3; we'll be using MicroProfile 2.0 with Projects as the plugin that will generate the OAS from the code.
Now, we have different headers that are needed as a kind of authorization. I know I can put them on each resource, but that seems like a lot of repetition, so I thought it was a good idea to put them in the JaxRSConfiguration file as a @SecurityScheme and add them as security to the @OpenAPIDefinition.
@OpenAPIDefinition(
    info = @Info(
        ...
    ),
    security = {
        @SecurityRequirement(name = Header1),
        @SecurityRequirement(name = Header2)
    },
    components = @Components(
        securitySchemes = {
            @SecurityScheme( apiKeyName = Header1 ... ),
            @SecurityScheme( apiKeyName = Header2 ... )
        }),
    ...
)
This is working; however, it generates an OR, while it should be an AND (without the -):
security:
- Header1: []
- Header2: []
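Concretely, the AND form being asked for would put both schemes inside a single requirement object (note the single dash):

security:
- Header1: []
  Header2: []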
I thought I could use @SecurityRequirementsSet inside security in the @OpenAPIDefinition, but unfortunately this is not allowed. I know I can use it on each call or on top of the class containing the different calls, but as this is still a sort of repetition, I would prefer to have it as a general authentication security.
Does anybody know how I can achieve this?
I'm fairly new to Rust and have been following the official book that they provide on their site. During chapter 2, they tell you to add the "rand" crate as a dependency, which I did. However, when I try to run my code directly through VS Code, I get an error saying "unresolved import rand". When I run it through the command prompt, everything works fine. I've already tried every solution suggested here: https://github.com/rust-lang/rls-vscode/issues/513 and nothing seems to have worked. Extensions that I'm using:
Better TOML
Cargo
Code Runner
Rust (rls)
Rust Assist
vsc-rustfmt
vscode-rust-syntax
Has anyone else run into a similar problem or know a solution? Thank you!
Edit: My Cargo.toml looks like this:
[package]
name = "guessing_game"
version = "0.1.0"
authors = ["Name <MyNameHere#gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rand = "0.6.0"
Edit 2: my main.rs file looks like this:
use rand::Rng;
use std::cmp::Ordering;
use std::io;

fn main() {
    println!("Guess the number!");

    let secret_number = rand::thread_rng().gen_range(1, 101);

    loop {
        println!("Please input your guess!");

        let mut guess = String::new();
        io::stdin().read_line(&mut guess).expect("Failed to read line!");

        let guess: u32 = match guess.trim().parse() {
            Ok(num) => num,
            Err(_) => continue,
        };

        println!("Your guess {}", guess);

        match guess.cmp(&secret_number) {
            Ordering::Less => println!("Too small!"),
            Ordering::Greater => println!("Too big!"),
            Ordering::Equal => {
                println!("You win!");
                break;
            }
        }
    }
}
Got a fix!
In VSC, select Extensions, select the Code Runner extension, click the little gear symbol and select Extension Settings. It's the Code-runner: Executor Map setting that needs to be changed. Click the 'Edit in settings.json' link.
Add the following to the file:
"code-runner.executorMap": {
"rust": "cargo run # $fileName"
}
If you already have content in the settings.json file then remember to add a comma to the line above and put your edit inside the outermost curly braces, e.g.
{
    "breadcrumbs.enabled": true,
    "code-runner.clearPreviousOutput": true,
    "code-runner.executorMap": {
        "rust": "cargo run # $fileName"
    }
}
This tells Code Runner to use the 'cargo run' command instead of 'rustc'. The # comments out the $fileName argument that Code Runner appends, so cargo resolves the right file through the project's Cargo.toml itself.
This fix came from this question on stackoverflow.
Context - I am trying out Postgres' Geographic Information System extension PostGIS, which enables storing latitudes and longitudes as a Point and operations on it.
If I understand correctly, I need to add a custom converter that can convert the point between jOOQ and PostGIS and add it to the Gradle file.
Problem - When I generate the jOOQ code, a few files are generated incorrectly and have their fields defined twice, which fails compilation. These are:
<configured-generation-dir>/tables/StValuecount.java
<configured-generation-dir>/tables/records/StValuecountRecord.java
<configured-generation-dir>/tables/records/StValuepercentRecord.java
<configured-generation-dir>/tables/_StValuecount.java
<configured-generation-dir>/tables/records/_StValuecountRecord.java
<configured-generation-dir>/tables/_StHistogram.java
<configured-generation-dir>/tables/records/_StHistogramRecord.java
<configured-generation-dir>/tables/_StQuantile.java
Gradle config =>
jooq {
    myAwesomeApp(sourceSets.main) {
        logging = 'WARN'
        jdbc {
            driver = 'org.postgresql.Driver'
            url = db_url
            user = db_user
            password = db_password
        }
        generator {
            name = 'org.jooq.codegen.DefaultGenerator'
            strategy {
                name = 'org.jooq.codegen.DefaultGeneratorStrategy'
            }
            database {
                name = 'org.jooq.meta.postgres.PostgresDatabase'
                inputSchema = 'public'
                forcedTypes {
                    forcedType {
                        userType = 'org.postgis.Point'
                        converter = 'com.example.JooqBreaksWithPostGis.jooq.converters.PostgresPointJooqConverter'
                        expression = '.*\\.point'
                        types = '.*'
                    }
                }
            }
            generate {
                routines = false
                relations = true
                deprecated = false
                records = true
                immutablePojos = false
                fluentSetters = true
            }
            target {
                packageName = 'jooq.fancy.app'
                directory = 'src/main/java/generated'
            }
        }
    }
}
What am I doing wrong?
I have also created a minimal project where I have reproduced the problem in case someone wants to quickly try it.
Steps to reproduce
Checkout project
git clone git@github.com:raj-saxena/JooqBreaksWithPostGis.git
Go to the project directory and start the PostGIS Docker container with
docker-compose up
Similarly, to remove the PostGIS Docker container, run
docker-compose down
Run migrations that add a simple City table containing a Point type with
./gradlew flywayMigrate
I have added a few rows in a second migration to verify that the DB structure works. Details to connect to the Postgres instance are in the build.gradle file.
Generate jooq files with
./gradlew generateMyAwesomeAppJooqSchemaSource
Verify that the files are generated in the configured src/main/java/generated directory.
Verify that the files mentioned above fail to compile.
Taking Lukas' advice, I added the exclude configuration to the jOOQ config as below:
database {
    name = 'org.jooq.meta.postgres.PostgresDatabase'
    ...
    excludes = '.*ST_ValueCount' +
            '|.*St_Valuepercent' +
            '|.*St_Histogram' +
            '|.*St_Quantile' +
            '|.*St_Approxhistogram' +
            '|.*St_PixelOfValue' +
            '|.*St_Approxquantile' +
            '|.*ST_Tile'
}
This allowed the code to compile.
This sounds a lot like https://github.com/jOOQ/jOOQ/issues/4055. jOOQ 3.11 currently cannot handle overloaded table-valued functions in any RDBMS that supports table-valued functions. Your best option here is to exclude all the affected functions from the code generation, using <excludes>:
https://www.jooq.org/doc/latest/manual/code-generation/codegen-advanced/codegen-config-database/codegen-database-includes-excludes/
Thanks to this question, I started looking at Wuff to help with a Gradle build (converting an Eclipse plugin).
This is probably such a newbie question, so I do apologize in advance, but I couldn't find the answer anywhere:
We're currently using Eclipse 4.3.1. So, I followed the wiki page and changed the version:
wuff {
    selectedEclipseVersion = '4.3.1'
    eclipseVersion('4.3.1') {
    }
}
Which seems to work. However, the default mirror site does not contain that version anymore, so I'm getting a FileNotFoundException error (for eclipse-SDK-4.3.1-linux-gtk-x86_64.tar.gz).
Now, I'm guessing it should have automatically gone to the archive site, but for some reason it does not.
I tried fiddling around with the eclipseMirror extension (since changing extra properties is now disabled by Gradle):
wuff.ext.'eclipseMirror' = 'http://archive.eclipse.org'
but to no avail.
Any insight would be appreciated.
Using the same version name just overrides the existing properties; it does not delete the rest, which was the problem (thanks to Andrey Hihlovskiy for pointing it out!). I wrote the following workaround:
selectedEclipseVersion = '4.3.1-mine'
...
eclipseVersion('4.3.1-mine') {
    extendsFrom '4.2.2'
    eclipseMavenGroup = 'eclipse-kepler-sr1'
    eclipseMirror = 'http://mirror.netcologne.de'
    eclipseArchiveMirror = 'http://archive.eclipse.org'
    def suffix_os = [ 'linux': 'linux-gtk', 'macosx': 'macosx-cocoa', 'windows': 'win32' ]
    def suffix_arch = [ 'x86_32': '', 'x86_64': '-x86_64' ]
    def fileExt_os = [ 'linux': 'tar.gz', 'macosx': 'tar.gz', 'windows': 'zip' ]
    def current_os = // your os
    def current_arch = // your arch
    sources {
        source "$eclipseMirror/eclipse//technology/epp/downloads/release/kepler/SR1/eclipse-jee-kepler-SR1-${suffix_os[current_os]}${suffix_arch[current_arch]}.${fileExt_os[current_os]}"
        source "$eclipseMirror/eclipse//technology/epp/downloads/release/kepler/SR1/eclipse-rcp-kepler-SR1-${suffix_os[current_os]}${suffix_arch[current_arch]}.${fileExt_os[current_os]}", sourcesOnly: true
        languagePackTemplate '${eclipseMirror}/eclipse//technology/babel/babel_language_packs/R0.11.1/kepler/BabelLanguagePack-eclipse-${language}_4.3.0.v20131123020001.zip'
    }
}