Integrating Driverless AI with Anylogic Simulation, MOJO Pipeline error - anylogic

I am trying to integrate Driverless AI into my AnyLogic simulation project, but I am unable to get the MOJO pipeline to work. I have placed the license.sig file in the same directory as the MOJO pipeline. The license.sig file was created from a text file containing the license key; I hope that is not the problem here. The error I am getting is "Model cannot be resolved".
Below is the code I used to run MOJO:
MojoFrameBuilder fb = stayDurationModel.getInputFrameBuilder();
MojoRowBuilder rb = fb.getMojoRowBuilder();
rb.setString("ORG", ORG);
rb.setString("Org_Product", Org_Product);
rb.setString("Org_Region", Org_Region);
rb.setDate("Due Date", new java.sql.Date(Due_Date.getTime()));
rb.setString("Customer Name", Customer_Name);
rb.setDouble("Invoice Amount", Invoice_Amount);
rb.setString("Bank Account Name", Bank_Account_Name);
fb.addRow(rb);
MojoFrame iframe = fb.toMojoFrame();
MojoFrame oframe = DSOModel.transform(iframe);
// Extract output from frame:
String[] outputArray = oframe.getColumn(0).getDataAsStrings();
double DSO = Double.valueOf(outputArray[0]);
lblPatientTxt.setText("Invoice(\n ORG: " + ORG + ",\n Org_Product: " +
Org_Product + ",\n Due Date " + Due_Date + ",\n Invoice Amount" +
Invoice_Amount + ",\n Customer Name " + Customer_Name + ",\n Bank Account
Name"
+ Bank_Account_Name + ")");
lblBedDays.setText("DSO" + DSO + " days");
stayDurationDS.add(DSO);
return DSO;
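For context, the MOJO pipeline itself has to be loaded before this snippet runs; a minimal sketch of that step with the MOJO2 runtime API (the file names and the license system property are assumptions, not taken from my project, so verify them against your mojo2-runtime version):
// Sketch only: load the scoring pipeline once, e.g. in the agent's "On startup" code.
// In AnyLogic, put the import in the agent's Advanced Java / Imports section.
import ai.h2o.mojos.runtime.MojoPipeline;

// Point the runtime at the license explicitly if it is not picked up automatically
// (property name assumed from the MOJO2 runtime documentation).
System.setProperty("ai.h2o.mojos.runtime.license.file", "license.sig");

// loadFrom() throws checked exceptions, so wrap it in try/catch in the real model.
MojoPipeline stayDurationModel = MojoPipeline.loadFrom("pipeline.mojo");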
I have attached the console error logs and the code. Thanks in advance!
Console Error Message

Related

Rest Assured cannot be resolved to a variable

I have created a Java project and am getting the following error in my console:
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
RestAssured cannot be resolved to a variable
I added the jars from rest-assured-4.3.3-dist.zip (all files extracted), downloaded from the official website: https://github.com/rest-assured/rest-assured/wiki/Downloads
Here is my code:
//java class basics
import io.restassured.RestAssured;
import static io.restassured.RestAssured.*;
public class Basics {
public static void main(String[] args) {
//adding given, when , then conditions
RestAssured.baseURI = "https://rahulshettyacademy.com"; //added the base URI here
//adding given condition here with log report
given().log().all().queryParam("key", "qaclick123").header("Content-Type", "application/json")
.body("{\r\n" +
" \"location\": {\r\n" +
" \"lat\": -38.383494,\r\n" +
" \"lng\": 33.427362\r\n" +
" },\r\n" +
" \"accuracy\": 50,\r\n" +
" \"name\": \" Muzammil house\",\r\n" +
" \"phone_number\": \"(+91) 983 893 3937\",\r\n" +
" \"address\": \"29, side layout, cohen 09\",\r\n" +
" \"types\": [\r\n" +
" \"shoe park\",\r\n" +
" \"shop\"\r\n" +
" ],\r\n" +
" \"website\": \"http://google.com\",\r\n" +
" \"language\": \"French-IN\"\r\n" +`enter code here`
"}") // end of body
.when().post("maps/api/place/add/json") // added the resource here
.then().log().all().assertThat().statusCode(200); // validating response here
}
}
How do I resolve this?
I assume that you are using Maven. If that is the case, you need to remove the
<scope>test</scope>
node from your rest-assured dependency in the pom.xml file. If you are not using Maven, check the build path and make sure that you added all the .jar files to the project.
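For example, the rest-assured dependency without the test scope would look roughly like this in pom.xml (version taken from the question; adjust to yours):
<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <version>4.3.3</version>
    <!-- no <scope>test</scope> here, so code under src/main/java can resolve RestAssured -->
</dependency>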

Flink SQL Client connect to secured kafka cluster

I want to execute a query on a Flink SQL table backed by a Kafka topic in a secured Kafka cluster. I'm able to execute the query programmatically but unable to do the same through the Flink SQL client. I'm not sure how to pass the JAAS config (java.security.auth.login.config) and other system properties through the Flink SQL client.
Flink SQL query programmatically
private static void simpleExec_auth() {
// Create the execution environment.
final EnvironmentSettings settings = EnvironmentSettings.newInstance()
.inStreamingMode()
.withBuiltInCatalogName(
"default_catalog")
.withBuiltInDatabaseName(
"default_database")
.build();
System.setProperty("java.security.auth.login.config","client_jaas.conf");
System.setProperty("sun.security.jgss.native", "true");
System.setProperty("sun.security.jgss.lib", "/usr/libexec/libgsswrap.so");
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
TableEnvironment tableEnvironment = TableEnvironment.create(settings);
String createQuery = "CREATE TABLE test_flink11 ( " + "`keyid` STRING, " + "`id` STRING, "
+ "`name` STRING, " + "`age` INT, " + "`color` STRING, " + "`rowtime` TIMESTAMP(3) METADATA FROM 'timestamp', " + "`proctime` AS PROCTIME(), " + "`address` STRING) " + "WITH ( "
+ "'connector' = 'kafka', "
+ "'topic' = 'test_flink10', "
+ "'scan.startup.mode' = 'latest-offset', "
+ "'properties.bootstrap.servers' = 'kafka01.nyc.com:9092', "
+ "'value.format' = 'avro-confluent', "
+ "'key.format' = 'avro-confluent', "
+ "'key.fields' = 'keyid', "
+ "'value.fields-include' = 'EXCEPT_KEY', "
+ "'properties.security.protocol' = 'SASL_PLAINTEXT', 'properties.sasl.kerberos.service.name' = 'kafka', 'properties.sasl.kerberos.kinit.cmd' = '/usr/local/bin/skinit --quiet', 'properties.sasl.mechanism' = 'GSSAPI', "
+ "'key.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', "
+ "'key.avro-confluent.schema-registry.subject' = 'test_flink6', "
+ "'value.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', "
+ "'value.avro-confluent.schema-registry.subject' = 'test_flink4')";
System.out.println(createQuery);
tableEnvironment.executeSql(createQuery);
TableResult result = tableEnvironment
.executeSql("SELECT name,rowtime FROM test_flink11");
result.print();
}
This is working fine.
Flink SQL query through SQL client
Running this gives the following error.
Flink SQL> CREATE TABLE test_flink11 (`keyid` STRING,`id` STRING,`name` STRING,`address` STRING,`age` INT,`color` STRING) WITH('connector' = 'kafka', 'topic' = 'test_flink10','scan.startup.mode' = 'earliest-offset','properties.bootstrap.servers' = 'kafka01.nyc.com:9092','value.format' = 'avro-confluent','key.format' = 'avro-confluent','key.fields' = 'keyid', 'value.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', 'value.avro-confluent.schema-registry.subject' = 'test_flink4', 'value.fields-include' = 'EXCEPT_KEY', 'key.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', 'key.avro-confluent.schema-registry.subject' = 'test_flink6', 'properties.security.protocol' = 'SASL_PLAINTEXT', 'properties.sasl.kerberos.service.name' = 'kafka', 'properties.sasl.kerberos.kinit.cmd' = '/usr/local/bin/skinit --quiet', 'properties.sasl.mechanism' = 'GSSAPI');
Flink SQL> select * from test_flink11;
[ERROR] Could not execute SQL statement. Reason:
java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is /tmp/jaas-6309821891889949793.conf
There is nothing in /tmp/jaas-6309821891889949793.conf except the following comment
# We are using this file as an workaround for the Kafka and ZK SASL implementation
# since they explicitly look for java.security.auth.login.config property
# Please do not edit/delete this file - See FLINK-3929
SQL client run command
bin/sql-client.sh embedded --jar flink-sql-connector-kafka_2.11-1.12.0.jar --jar flink-sql-avro-confluent-registry-1.12.0.jar
Flink cluster command
bin/start-cluster.sh
How do I pass java.security.auth.login.config and the other system properties (that I'm setting in the Java code snippet above) to the SQL client?
flink-conf.yaml
# Option 1: use the Kerberos ticket cache
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.principal: XXXXX@HADOOP.COM

# Option 2: use a keytab instead of the ticket cache
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/kafka.keytab
security.kerberos.login.principal: XXXX@HADOOP.COM

security.kerberos.login.contexts: Client,KafkaClient
I haven't really tested whether this solution works, but you can try it out; I hope it helps you.
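For reference, as far as I understand, with security.kerberos.login.contexts: Client,KafkaClient Flink generates the required JAAS entries itself; a hand-written client_jaas.conf with the same effect would look roughly like this (keytab path and principal are placeholders, sketch only):
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/kafka.keytab"
  principal="XXXX@HADOOP.COM";
};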

Failed to connect to Confluent Platform Schema Registry - Apache Flink SQL Confluent Avro Format

I am using a Confluent-managed Kafka cluster and Schema Registry service and am trying to process Debezium messages in a Flink job. The job is configured to use the Table & SQL connectors and the Confluent Avro format.
However, the job is not able to connect to the Schema Registry and raises a 401 error.
Table Connector configurations
tEnv.executeSql("CREATE TABLE flink_test_1 (\n" +
" ORDER_ID STRING,\n" +
" ORDER_TYPE STRING,\n" +
" USER_ID STRING,\n" +
" ORDER_SUM BIGINT\n" +
") WITH (\n" +
" 'connector' = 'kafka',\n" +
" 'topic' = 'flink_test_1',\n" +
" 'scan.startup.mode' = 'earliest-offset',\n" +
" 'format' = 'avro-confluent',\n" +
" 'avro-confluent.schema-registry.url' = 'https://<SR_ENDPOINT>',\n" +
" 'avro-confluent.schema-registry.subject' = 'flink_test_1-value',\n" +
" 'properties.basic.auth.credentials.source' = 'USER_INFO',\n" +
" 'properties.basic.auth.user.info' = '<SR_API_KEY>:<SR_API_SECRET>',\n" +
" 'properties.bootstrap.servers' = '<CLOUD_BOOTSTRAP_SERVER_ENDPOINT>:9092',\n" +
" 'properties.security.protocol' = 'SASL_SSL',\n" +
" 'properties.ssl.endpoint.identification.algorithm' = 'https',\n" +
" 'properties.sasl.mechanism' = 'PLAIN',\n" +
" 'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<CLUSTER_API_KEY>\" password=\"<CLUSTER_API_SECRET>\";'\n" +
")");
Error Message
Caused by: java.io.IOException: Failed to deserialize Avro record.
at org.apache.flink.formats.avro.AvroRowDataDeserializationSchema.deserialize(AvroRowDataDeserializationSchema.java:101)
at org.apache.flink.formats.avro.AvroRowDataDeserializationSchema.deserialize(AvroRowDataDeserializationSchema.java:44)
at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
at org.apache.flink.streaming.connectors.kafka.table.DynamicKafkaDeserializationSchema.deserialize(DynamicKafkaDeserializationSchema.java:113)
at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:179)
at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.runFetchLoop(KafkaFetcher.java:142)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:826)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:241)
Caused by: java.io.IOException: Could not find schema with id 100256 in registry
at org.apache.flink.formats.avro.registry.confluent.ConfluentSchemaRegistryCoder.readSchema(ConfluentSchemaRegistryCoder.java:77)
at org.apache.flink.formats.avro.RegistryAvroDeserializationSchema.deserialize(RegistryAvroDeserializationSchema.java:70)
at org.apache.flink.formats.avro.AvroRowDataDeserializationSchema.deserialize(AvroRowDataDeserializationSchema.java:98)
... 9 more
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unauthorized; error code: 401
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:660)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:642)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:217)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaBySubjectAndId(CachedSchemaRegistryClient.java:291)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaById(CachedSchemaRegistryClient.java:276)
at io.confluent.kafka.schemaregistry.client.SchemaRegistryClient.getById(SchemaRegistryClient.java:64)
at org.apache.flink.formats.avro.registry.confluent.ConfluentSchemaRegistryCoder.readSchema(ConfluentSchemaRegistryCoder.java:74)
... 11 more
I successfully tested the connection to Schema Registry by:
curl -u <SR_API_KEY>:<SR_API_SECRET> https://<SR_ENDPOINT>
The error message "io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unauthorized; error code: 401" seems to say clearly that <SR_API_KEY>:<SR_API_SECRET> were not passed to the Confluent Schema Registry.
I checked the documentation at https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/formats/avro-confluent.html, where only three format options are described ("format", "avro-confluent.schema-registry.url", "avro-confluent.schema-registry.subject") and there are no options for specifying SR_API_KEY and SR_API_SECRET.
I can't figure out how to successfully connect to the secure schema registry from the Flink program.
Is this connection type supported by Flink?
Does anyone know what the correct connection configuration should look like?
Thanks.
I ran into the same issue.
After some investigation, I found a Jira ticket about it.
If you can't upgrade your Flink version, you can first use the DataStream API to consume the data and then convert it to a Table.
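A rough, untested sketch of that workaround (the forGeneric overload taking registry client properties is an assumption; check that your Flink version has it):
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Schema Registry client settings carrying the basic-auth credentials
// (same keys the Confluent client understands).
Map<String, String> registryConfigs = new HashMap<>();
registryConfigs.put("basic.auth.credentials.source", "USER_INFO");
registryConfigs.put("basic.auth.user.info", "<SR_API_KEY>:<SR_API_SECRET>");

// Reader schema for the value records (assumed to be available as a local .avsc file).
Schema schema = new Schema.Parser().parse(new java.io.File("flink_test_1-value.avsc"));

// Kafka client properties, mirroring the 'properties.*' options from the DDL above.
Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "<CLOUD_BOOTSTRAP_SERVER_ENDPOINT>:9092");
kafkaProps.setProperty("group.id", "flink_test_1_consumer"); // group id is an assumption
kafkaProps.setProperty("security.protocol", "SASL_SSL");
kafkaProps.setProperty("sasl.mechanism", "PLAIN");
kafkaProps.setProperty("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"<CLUSTER_API_KEY>\" password=\"<CLUSTER_API_SECRET>\";");

FlinkKafkaConsumer<GenericRecord> consumer = new FlinkKafkaConsumer<>(
        "flink_test_1",
        ConfluentRegistryAvroDeserializationSchema.forGeneric(
                schema, "https://<SR_ENDPOINT>", registryConfigs),
        kafkaProps);

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<GenericRecord> stream = env.addSource(consumer);
// ...then register it with StreamTableEnvironment.fromDataStream(...) and query it as a Table.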

Getting Object tag value in AWS S3

I am using Scala to get information about my objects in S3. I am interested in getting, for each object in the S3 bucket, its tag value. So far the code below gets me information about my objects, but I have not succeeded in getting their tag values. This is the Scala code I used:
def retrieveObjectTags(keyName: String): Unit ={
try {
println("Listing objects")
val req: ListObjectsV2Request =
new ListObjectsV2Request().withBucketName(bucketName).withMaxKeys(2)
var result: ListObjectsV2Result = null
do {
result = client.listObjectsV2(req)
for (objectSummary <- result.getObjectSummaries) {
println(
" - " + objectSummary.getKey + " " + "(size = " + objectSummary.getSize +
")")
println(objectSummary.getETag)
}
println("Next Continuation Token : " + result.getNextContinuationToken)
req.setContinuationToken(result.getNextContinuationToken)
} while (result.isTruncated == true);
}catch {
case ase: AmazonServiceException => {
println(
"Caught an AmazonServiceException, " + "which means your request made it " +
"to Amazon S3, but was rejected with an error response " +
"for some reason.")
println("Error Message: " + ase.getMessage)
println("HTTP Status Code: " + ase.getStatusCode)
println("AWS Error Code: " + ase.getErrorCode)
println("Error Type: " + ase.getErrorType)
println("Request ID: " + ase.getRequestId)
}
case ace: AmazonClientException => {
println(
"Caught an AmazonClientException, " + "which means the client encountered " +
"an internal error while trying to communicate" +
" with S3, " +
"such as not being able to access the network.")
println("Error Message: " + ace.getMessage)
}
}
// val getTaggingRequest = new GetObjectTaggingRequest(bucketName,keyName)
// var getTagResult = client.getObjectTagging(getTaggingRequest)
//println(getTaggingRequest)
var tag: Tag = new Tag()
println("tag name:" + tag.getValue)
}
As for the commented-out lines, I ran into a problem with them. What other way can I use to solve this problem?
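For reference, the usual pattern with the v1 SDK is a per-object tagging call; an untested sketch along those lines (reusing client and bucketName from the code above) would be:
import com.amazonaws.services.s3.model.{GetObjectTaggingRequest, Tag}
import scala.collection.JavaConverters._

// Fetch the tag set of a single object and print every key/value pair.
def printObjectTags(keyName: String): Unit = {
  val taggingResult = client.getObjectTagging(new GetObjectTaggingRequest(bucketName, keyName))
  for (tag <- taggingResult.getTagSet.asScala) {
    println("tag key: " + tag.getKey + ", tag value: " + tag.getValue)
  }
}

// Called from the listing loop, e.g.:
// printObjectTags(objectSummary.getKey)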

Enable Caching for all reports in SSRS Report Server

I have more than 100 reports in SSRS report server. I need to enable caching for all of those. Right now I am enabling caching through the report manager for each and every report.
Can we add caching in any of the report server's config files, so that we can enable caching for all reports in a single place?
Any help will be appreciated
Thanks
AJ
Below is the script that I used to enable caching in minutes on a list of reports.
Save it as setreportscaching.rss and then run it from the command line:
rs.exe -i setreportscaching.rss -e Mgmt2010 -t -s http://mySsrsBox:8080/ReportServer -v ReportNamesList="OneReport,AnotherReport,YetAnotherOne" -v CacheTimeMinutes="333" -v TargetFolder="ReportsFolderOnServer"
It is easy to modify it to loop through the reports in a server folder rather than take a CSV list of report names (see the sketch after the script below). It includes some simple diagnostics that can be commented out for speed.
Public Sub Main()
    Dim reportNames As String() = Nothing
    Dim reportName As String
    Dim texp As TimeExpiration
    Dim reportPath As String

    Console.WriteLine("Looping through reports: {0}", ReportNamesList)
    reportNames = ReportNamesList.Split(","c)

    For Each reportName In reportNames
        texp = New TimeExpiration()
        texp.Minutes = CacheTimeMinutes
        reportPath = "/" + TargetFolder + "/" + reportName

        'feel free to comment out this diagnostics to speed things up
        Console.WriteLine("Current caching for " + reportName + DisplayReportCachingSettings(reportPath))

        'this call sets desired caching option
        rs.SetCacheOptions(reportPath, true, texp)

        'feel free to comment out this diagnostics to speed things up
        Console.WriteLine("New caching for " + reportName + DisplayReportCachingSettings(reportPath))
    Next
End Sub

Private Function DisplayReportCachingSettings(reportPath as string)
    Dim isCacheSet As Boolean
    Dim expItem As ExpirationDefinition = New ExpirationDefinition()
    Dim theResult As String

    isCacheSet = rs.GetCacheOptions(reportPath, expItem)
    If isCacheSet = false Or expItem is Nothing Then
        theResult = " is not defined."
    Else
        If expItem.GetType.Name = "TimeExpiration" Then
            theResult = " is " + (CType(expItem, TimeExpiration)).Minutes.ToString() + " minutes."
        ElseIf expItem.GetType.Name = "ScheduleExpiration" Then
            theResult = " is a schedule"
        Else
            theResult = " is " + expItem.GetType.Name
        End If
    End If

    DisplayReportCachingSettings = theResult
End Function
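As mentioned above, the same script can loop over a folder instead of taking a CSV list; an untested sketch of that variation using the ReportService2010 ListChildren call:
' Sketch: cache every report found directly under TargetFolder.
Dim items As CatalogItem() = rs.ListChildren("/" + TargetFolder, False)
For Each item As CatalogItem In items
    If item.TypeName = "Report" Then
        Dim texp As TimeExpiration = New TimeExpiration()
        texp.Minutes = CacheTimeMinutes
        rs.SetCacheOptions(item.Path, True, texp)
    End If
Next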