No function or associated item named `table` found for struct `schema::todos::table` in the current scope - postgresql

I want to create a function that uses Diesel to read all the data from PostgreSQL and return it as a Vec.
Error message
I believe this error message indicates that todos does not implement the table function, but I don't understand why that would be the case, since I have #[derive(Queryable)] on the model.
error[E0599]: no function or associated item named `table` found for struct `schema::todos::table` in the current scope
--> src/actions.rs:9:42
|
9 | let entries: Vec<TodoEntry> = todos::table.load::<TodoEntry>(&connection).expect("Error loading todos");
| ^^^^^ function or associated item not found in `schema::todos::table`
|
::: src/schema.rs:1:1
|
1 | / table! {
2 | | todos (id) {
3 | | id -> Int4,
4 | | body -> Text,
5 | | }
6 | | }
| |_- function or associated item `table` not found for this
|
= help: items from traits can only be used if the trait is in scope
= note: the following trait is implemented but not in scope; perhaps add a `use` for it:
`use crate::diesel::associations::HasTable;`
The relevant code
Cargo.toml
[package]
authors = ["hogehogehoge"]
edition = "2018"
name = "todo"
version = "0.1.0"
[dependencies]
actix-web = "3"
askama = "*"
thiserror = "*"
diesel = { version = "1.4.4", features = ["postgres", "r2d2"] }
r2d2 = "0.8"
dotenv = "0.15.0"
diesel.toml
[print_schema]
file = "src/schema.rs"
actions.rs
This file has a function that returns the data registered in the db as a Vec.
use diesel::pg::PgConnection;
use crate::models;
use crate::templates::TodoEntry;
pub fn return_all(connection: &PgConnection) -> Result<Vec<TodoEntry>, diesel::result::Error> {
use crate::schema::todos::dsl::*;
let entries: Vec<TodoEntry> = todos::table.load::<TodoEntry>(&connection).expect("Error loading todos");
Ok(entries)
}
templates.rs
This file defines the data type to register in the db and adds #[derive(Queryable)] to it.
#[derive(Queryable)]
pub struct Todo {
pub id: u32,
pub text: String,
}
schema.rs
table! {
todos (id) {
id -> Int4,
body -> Text,
}
}

I think you've mixed up two different ways of referring to a specific table. According to the corresponding guide, it's either crate::schema::table_name::table or crate::schema::table_name::dsl::table_name. You are effectively trying to use crate::schema::table_name::dsl::table_name::table.
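In other words, with the dsl glob import in scope, todos is itself the table item, so todos::table looks for an associated item named table on it — which is exactly what the error says. A minimal sketch of the fixed function (assuming TodoEntry derives Queryable with fields matching the id and body columns in schema.rs):

use diesel::prelude::*;
use diesel::pg::PgConnection;
use crate::templates::TodoEntry;

pub fn return_all(connection: &PgConnection) -> Result<Vec<TodoEntry>, diesel::result::Error> {
    // Either refer to the table item directly ...
    let entries = crate::schema::todos::table.load::<TodoEntry>(connection)?;
    // ... or bring the DSL into scope and use the table's name:
    // use crate::schema::todos::dsl::*;
    // let entries = todos.load::<TodoEntry>(connection)?;
    Ok(entries)
}

Note that load comes from diesel::prelude::* (the RunQueryDsl trait), and connection is already a reference here, so it is passed as connection rather than &connection.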

Related

How to collect the list of differences of files between local changes and HEAD changes in a Map using the JGit library?

I need a list of file differences between the local worktree and HEAD, e.g.:
public void localDiffAndHead(Git git) throws IOException {
    try (DiffFormatter diffFormatter = new DiffFormatter(DisabledOutputStream.INSTANCE)) {
        diffFormatter.setRepository(git.getRepository());
        String lastCommit = getLastCommitOfCurrentBranch(git).get().getName();
        AbstractTreeIterator commitTreeIterator = prepareTreeParser(git.getRepository(), lastCommit);
        FileTreeIterator workTreeIterator = new FileTreeIterator(git.getRepository());
        List<DiffEntry> diffEntries = diffFormatter.scan(workTreeIterator, commitTreeIterator);
        for (DiffEntry diffEntry : diffEntries) {
            if (diffEntry.getOldPath().contains("/ui.xml")) {
                FileHeader fileHeader = diffFormatter.toFileHeader(diffEntry);
                ListIterator<Edit> edits = fileHeader.toEditList().listIterator();
                while (edits.hasNext()) {
                    Edit edit = edits.next();
                    System.out.println(edit.getType());
                    System.out.println(edit.getBeginA());
                    System.out.println(edit.getEndA());
                    System.out.println(edit.getBeginB());
                    System.out.println(edit.getEndB());
                }
            }
        }
        System.out.println();
    }
}
My code above prints the following output for my selected file "ui.xml":
Replace
0
1
0
1
but my expected result should look like this:
Type    | Content
Replace | xyz
Delete  | abc
Insert  | bbc
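One way to get at the actual line content (just a sketch, not from the original post; it assumes the entry is a modification where both sides exist, as with ui.xml above, and that the scan was done with the commit tree as the old side and the worktree as the new side — the reverse of the argument order in the snippet above):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.diff.DiffEntry;
import org.eclipse.jgit.diff.DiffFormatter;
import org.eclipse.jgit.diff.Edit;
import org.eclipse.jgit.diff.RawText;
import org.eclipse.jgit.patch.FileHeader;

// Hypothetical helper: prints the changed lines for one DiffEntry.
static void printEditContent(Git git, DiffFormatter diffFormatter, DiffEntry diffEntry) throws IOException {
    FileHeader fileHeader = diffFormatter.toFileHeader(diffEntry);
    // Old side (the commit): read the blob the entry points at.
    RawText oldText = new RawText(git.getRepository().open(diffEntry.getOldId().toObjectId()).getBytes());
    // New side (the worktree): read the file from disk.
    File workTreeFile = new File(git.getRepository().getWorkTree(), diffEntry.getNewPath());
    RawText newText = new RawText(Files.readAllBytes(workTreeFile.toPath()));
    for (Edit edit : fileHeader.toEditList()) {
        if (edit.getType() == Edit.Type.DELETE || edit.getType() == Edit.Type.REPLACE) {
            // Lines removed (or replaced) on the old side.
            System.out.println(edit.getType() + " | " + oldText.getString(edit.getBeginA(), edit.getEndA(), false));
        }
        if (edit.getType() == Edit.Type.INSERT || edit.getType() == Edit.Type.REPLACE) {
            // Lines inserted (or the replacement) on the new side.
            System.out.println(edit.getType() + " | " + newText.getString(edit.getBeginB(), edit.getEndB(), false));
        }
    }
}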

Using terraform to fetch entity name under alias

I am trying to fetch all the entity names using the data source vault_identity_entity, however I am unable to fetch the name of the entity located under aliases.
Sample code:
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}
data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}
data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup(jsondecode(data.vault_identity_entity.entity[each.key].data_json), "name", {})
  }
}
"data_json": "{\"aliases\":[{\"canonical_id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"creation_time\":\"2022-07-20T08:53:36.553988277Z\",\"custom_metadata\":null,\"id\":\"59fb8a9c-1c0c-0591-0f6e-1a153233e456\",\"last_update_time\":\"2022-07-20T08:53:36.553988277Z\",\"local\":false,\"merged_from_canonical_ids\":null,\"metadata\":null,\"mount_accessor\":\"auth_approle_12d1d8af\",\"mount_path\":\"auth/approle/\",\"mount_type\":\"approle\",\"name\":\"name.user#test.com\"}],\"creation_time\":\"2022-07-20T08:53:36.553982983Z\",\"direct_group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"disabled\":false,\"group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"inherited_group_ids\":[],\"last_update_time\":\"2022-07-20T08:53:36.553982983Z\",\"merged_entity_ids\":null,\"metadata\":null,\"name\":\"entity_ec5c123\",\"namespace_id\":\"root\",\"policies\":[]}",
The above script returns the entity id entity_ec5c123. Any suggestions on how to retrieve the name field under aliases, which holds the user's email id?
Maybe something like this?
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}
data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}
locals {
  mount_accessor = "auth_approle_12d1d8af"
  # mount_path = "auth/approle/"
  aliases = { for k, v in data.vault_identity_entity.entity : k => jsondecode(v.data_json)["aliases"] }
}
data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup({ for alias in lookup(local.aliases, each.key, "ent_missing") : alias.mount_accessor => alias.name }, local.mount_accessor, "ent_no_alias_on_auth_method")
  }
}
Basically you want to do a couple of lookups here. You can simplify this if you can guarantee that each entity will only have a single alias, but otherwise you should probably look up the alias for a specific mount_accessor and discard the other entries.
Haven't really done a bunch of testing with this code, but you should be able to run terraform console after doing an init on your workspace and figure out what the data structs look like if you have issues.
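For instance (untested, and based only on the data_json shown above), an expression along these lines in terraform console should surface the alias name that holds the email id, assuming the entity data source from the question is already in state:

> jsondecode(data.vault_identity_entity.entity["37b4c764-a4ec-dcb7-c3c7-31cf9c51e456"].data_json)["aliases"][0].name
"name.user#test.com"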

Unable to stop try_for_each_concurrent for mongodb client

I'm trying to create a gRPC endpoint that streams results from a Cursor. To achieve this I'm using the .map_ok and try_for_each_concurrent methods. I'm using the try version of for_each because I would like to stop the loop if any error occurs.
I'm experiencing an issue with trying to stop try_for_each_concurrent, because it expects me to return a mongodb::error::Error and I'm unable to create one.
How do I create a mongodb error?
Here is a snippet of the code :-)
let mongo_db_collection_stream = mongo_db_collection_stream.map_ok(|doc| {
    let parse_result: Result<Asset, _> = from_bson(Bson::Document(doc));
    parse_result
});
mongo_db_collection_stream.try_for_each_concurrent(None, |vs_or_err| {
    let mut tx_copy = tx.clone();
    tokio::spawn(async move {
        tx_copy.send(vs_or_err.clone()).await.unwrap();
    });
    async {
        match vs_or_err {
            Ok(v) => Ok(()),
            Err(v) => Err(v),
        }
    }
}).await;
output from console
error[E0271]: type mismatch resolving `<impl futures::Future as futures::Future>::Output == Result<(), mongodb::error::Error>`
--> file.rs:101:9
|
101 | / mongo_db_collection_stream.try_for_each_concurrent(None, |vs_or_err| {
102 | | let mut tx_copy = tx.clone();
103 | |
104 | | tokio::spawn(async move {
... |
113 | | }
114 | | }).await;
| |________________^ expected struct `mongodb::error::Error`, found enum `mongodb::bson::de::Error`
FYI: it looks like mongodb::bson::de::Error does not implement a conversion into mongodb::error::Error.
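One way around that (just a sketch, not from the original thread, assuming cursor stream, tx and Asset as in the snippet above): rather than constructing a mongodb::error::Error, convert both error types into your own enum before calling try_for_each_concurrent, starting from the raw cursor stream instead of the map_ok above, so the closure only ever returns that enum.

use futures::stream::TryStreamExt;
use mongodb::bson::{self, from_bson, Bson};

// Hypothetical wrapper over the two error types the stream can produce.
#[derive(Debug)]
enum StreamError {
    Mongo(mongodb::error::Error),
    Bson(bson::de::Error),
}

// Map the cursor error and the deserialization error into StreamError,
// so the Ok items of the stream are plain Assets.
let assets = mongo_db_collection_stream
    .map_err(StreamError::Mongo)
    .and_then(|doc| async move {
        from_bson::<Asset>(Bson::Document(doc)).map_err(StreamError::Bson)
    });

let result: Result<(), StreamError> = assets
    .try_for_each_concurrent(None, |asset| {
        let mut tx_copy = tx.clone();
        async move {
            // Forward the parsed Asset (unwrap kept from the original snippet;
            // a dedicated StreamError variant could cover send failures too).
            tx_copy.send(asset).await.unwrap();
            Ok(())
        }
    })
    .await;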

How to list all files from resources folder with scala

Assume the following structure in your resources folder:
resources
├─spec_A
| ├─AA
| | ├─file-aev
| | ├─file-oxa
| | ├─…
| | └─file-stl
| ├─BB
| | ├─file-hio
| | ├─file-nht
| | ├─…
| | └─file-22an
| └─…
├─spec_B
| ├─AA
| | ├─file-aev
| | ├─file-oxa
| | ├─…
| | └─file-stl
| ├─BB
| | ├─file-hio
| | ├─file-nht
| | ├─…
| | └─file-22an
| └─…
└─…
The task is to read all files for a given specification spec_X, one subfolder at a time. For obvious reasons we do not want to have the exact names as string literals in the code, opened with Source.fromResource("spec_A/AA/…"), for hundreds of files.
Additionally, this solution should of course run inside the development environment, i.e. without being packaged into a jar.
The only option I found to list files inside a resource folder is nio's FileSystem concept, as this can load a jar file as a file system. But this comes with two major downsides:
java.nio uses the Java Stream API, which I cannot collect from inside Scala code: Collectors.toList() cannot be made to compile as it cannot determine the right type (a possible workaround is sketched after this list).
The filesystem needs different base paths for OS filesystems and jar-file-based filesystems. So I need to manually differentiate between the two situations, testing and jar-based running.
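As a side note (not part of the original post): the first downside can usually be avoided without a custom collector by going through the stream's iterator. A minimal sketch, assuming Scala 2.12+ with scala.collection.JavaConverters (scala.jdk.CollectionConverters on 2.13):

import java.nio.file.{Files, Path}
import scala.collection.JavaConverters._

// Convert the java.util.stream.Stream to a Scala List via its iterator,
// side-stepping Collectors.toList() and its type-inference trouble.
def listFiles(dir: Path): List[Path] =
  Files.list(dir)
    .filter(p => Files.isRegularFile(p))
    .sorted()                      // natural ordering of Path, still a Java Stream
    .iterator().asScala.toList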
First, lazily load the jar filesystem if needed:
private static FileSystem jarFileSystem;

static synchronized private FileSystem getJarFileAsFilesystem(String drg_file_root) throws URISyntaxException, IOException {
    if (jarFileSystem == null) {
        jarFileSystem = FileSystems.newFileSystem(ConfigFiles.class.getResource(drg_file_root).toURI(), Collections.emptyMap());
    }
    return jarFileSystem;
}
Next, do the limbo to figure out whether we are inside the jar or not by checking the protocol of the URL, and return a Path. (The protocol inside the jar file will be jar:.)
static Path getPathForResource(String resourceFolder, String filename) throws IOException, URISyntaxException {
    URL url = ConfigFiles.class.getResource(resourceFolder + "/" + filename);
    return "file".equals(url.getProtocol())
            ? Paths.get(url.toURI())
            : getJarFileAsFilesystem(resourceFolder).getPath(resourceFolder, filename);
}
And finally, list and collect into a Java list:
static List<Path> listPathsFromResource(String resourceFolder, String subFolder) throws IOException, URISyntaxException {
    return Files.list(getPathForResource(resourceFolder, subFolder))
            .filter(Files::isRegularFile)
            .sorted()
            .collect(toList());
}
Only then can we go back to Scala and fetch it:
class SpecReader {
  def readSpecMessage(spec: String): String = {
    List("CN", "DO", "KF")
      .flatMap(ConfigFiles.listPathsFromResource(s"/spec_$spec", _).asScala.toSeq)
      .flatMap(path ⇒ Source.fromInputStream(Files.newInputStream(path), "UTF-8").getLines())
      .reduce(_ + " " + _)
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    System.out.println(new SpecReader().readSpecMessage(args.head))
  }
}
I put together a running mini project to prove it here: https://github.com/kurellajunior/list-files-from-resource-directory
But of course this is far from optimal. I want to eliminate the two downsides mentioned above, so that I get:
Scala files only
no extra testing code in my production library
Here's a function for reading all files from a resource folder. My use case is with small files. Inspired by Jan's answers, but without needing a user-defined collector or messing with Java.
// Helper for reading an individual file.
def readFile(path: Path): String =
  Source.fromInputStream(Files.newInputStream(path), "UTF-8").getLines.mkString("\n")

// Static variable for storing a FileSystem. Will be loaded on the first call to getPath.
private var jarFS: FileSystem = null

/**
 * Gets a Path object corresponding to a URL.
 * @param url The URL could follow the `file:` (usually used in dev) or `jar:` (usually used in prod) protocols.
 * @return A Path object.
 */
def getPath(url: URL): Path = {
  if (url.getProtocol == "file")
    Paths.get(url.toURI)
  else {
    // This hacky branch is to handle reading resource files from a jar (where url is jar:...).
    val strings = url.toString.split("!")
    if (jarFS == null) {
      jarFS = FileSystems.newFileSystem(URI.create(strings(0)), Map[String, String]().asJava)
    }
    jarFS.getPath(strings(1))
  }
}

/**
 * Given a folder (e.g. "A"), reads all files under the resource folder (e.g. "src/main/resources/A/**") as a Seq[String].
 * @param folder Relative path to a resource folder under src/main/resources.
 * @return A sequence of strings. Each element corresponds to the contents of a single file.
 */
def readFilesFromResource(folder: String): Seq[String] = {
  val url = Main.getClass.getResource("/" + folder)
  val path = getPath(url)
  val ls = Files.list(path)
  ls.collect(Collectors.toList()).asScala.map(readFile) // Magic!
}
(Not tailored to the example in the question.)
Relevant imports:
import java.nio.file._
import scala.collection.JavaConverters._ // Needed for .asScala
import java.net.{URI, URL}
import java.util.stream._
import scala.io.Source
Thanks to @TrebledJ's answer, this could be minimized to the following:
class ConfigFiles(val basePath: String) {
  lazy val jarFileSystem: FileSystem =
    FileSystems.newFileSystem(getClass.getResource(basePath).toURI, Map[String, String]().asJava)

  def listPathsFromResource(folder: String): List[Path] = {
    Files.list(getPathForResource(folder))
      .filter(p ⇒ Files.isRegularFile(p, Array[LinkOption](): _*))
      .sorted.toList.asScala.toList // from java Stream to java List to Scala Buffer to Scala List
  }

  private def getPathForResource(filename: String) = {
    val url = classOf[ConfigFiles].getResource(basePath + "/" + filename)
    if ("file" == url.getProtocol) Paths.get(url.toURI)
    else jarFileSystem.getPath(basePath, filename)
  }
}
Special attention was necessary for the empty settings map.
Checking the URL protocol seems inevitable. Git repo updated, pull requests welcome: https://github.com/kurellajunior/list-files-from-resource-directory

Integrating gorm.Model fields into protobuf definitions

I am trying to figure out how to integrate the gorm.Model fields (deleted_at, created_at, id, etc.) into my proto3 definitions. However, I can't find a datetime type for proto3. I tried looking for documentation on how to serialize the gorm fields to strings (since proto3 handles strings), but I have not found anything.
Has anyone been able to successfully use the gorm model fields in their proto definitions? I'm using go-micro's plugin to generate *pb.go files.
Here's my current message definition, which doesn't work. It seems like empty strings are being stored in the database for deleted_at, since querying for deleted_at is null returns nothing from the Postgres database.
message DatabaseConfig {
  string address = 1;
  int32 port = 2;
  string databaseName = 3;
  string username = 4;
  string password = 5;
  string databaseType = 6;
  string quertStatement = 7;
  int32 id = 8;
  string createdAt = 9;
  string updatedAt = 10;
  string deletedAt = 11;
}
UPDATE:
I've updated my proto def to the following but gorm still isn't properly using the Id, CreatedAt, UpdatedAt, and DeletedAt fields
syntax = "proto3";
package go.micro.srv.importer;

import "google/protobuf/timestamp.proto";
import "github.com/gogo/protobuf/gogoproto/gogo.proto";

service ImporterService {
  rpc CreateDatabaseConfig(DatabaseConfig) returns (Response) {}
  rpc RetrieveDatabaseConfig(GetRequest) returns (Response) {}
  rpc UpdateDatabaseConfig(DatabaseConfig) returns (Response) {}
  rpc DeleteDatabaseConfig(DatabaseConfig) returns (Response) {}
}

message GetRequest {}

message DatabaseConfig {
  string address = 1;
  int32 port = 2;
  string databaseName = 3;
  string username = 4;
  string password = 5;
  string databaseType = 6;
  string quertStatement = 7;
  int32 id = 8;
  google.protobuf.Timestamp createdAt = 9 [(gogoproto.stdtime) = true];
  google.protobuf.Timestamp updatedAt = 10 [(gogoproto.stdtime) = true];
  google.protobuf.Timestamp deletedAt = 11 [(gogoproto.stdtime) = true];
}

message Response {
  bool created = 1;
  DatabaseConfig database_config = 2;
  repeated DatabaseConfig databaseConfigs = 3;
}
The protoc-gen-gorm project did not work for me. It looks like there is some blending of proto2 and proto3 happening, and ultimately I was unable to get it to work.
My solution was to create a script to do post processing after I generate the go files from protobuf.
If this was my proto profile/profile.proto:
message Profile {
  uint64 id = 1;
  string name = 2;
  bool active = 3;
  // ...
}
Which created profile/profile.pb.go with standard protoc command:
// ...
type Profile struct {
    state         protoimpl.MessageState
    sizeCache     protoimpl.SizeCache
    unknownFields protoimpl.UnknownFields

    Id     uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
    Name   string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
    Active bool   `protobuf:"varint,3,opt,name=active,proto3" json:"active,omitempty"`
}
// ...
I use this script gorm.sh:
#!/bin/bash

g () {
  sed "s/json:\"$1,omitempty\"/json:\"$1,omitempty\" gorm:\"type:$2\"/"
}

cat $1 \
  | g "id" "primary_key" \
  | g "name" "varchar(100)" \
  > $1.tmp && mv $1{.tmp,}
I invoke it on my Go file after it's generated, with ./gorm.sh profile/profile.pb.go, and the resulting profile/profile.pb.go is:
// ...
type Profile struct {
    state         protoimpl.MessageState
    sizeCache     protoimpl.SizeCache
    unknownFields protoimpl.UnknownFields

    Id     uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty" gorm:"type:primary_key"`
    Name   string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty" gorm:"type:varchar(100)"`
    Active bool   `protobuf:"varint,3,opt,name=active,proto3" json:"active,omitempty"`
}
// ...
Try to use protoc-gen-gorm. It will create another file .pb.gorm.go
It might be an option to use something like https://github.com/favadi/protoc-go-inject-tag to generate the tags automatically (I am still looking into this myself).
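If that route works out, the usual pattern (as I understand that project's README — double-check the exact comment syntax for your version) is to annotate the field in the .proto file and run the tool over the generated file after protoc, e.g.:

message Profile {
  // @inject_tag: gorm:"primaryKey"
  uint64 id = 1;
  // @inject_tag: gorm:"type:varchar(100)"
  string name = 2;
  bool active = 3;
}

protoc-go-inject-tag then rewrites the generated profile.pb.go so those gorm tags sit next to the protobuf/json tags, similar to what the sed script above does by hand.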