I'm working on a school schedule with OptaPy and I'm still a beginner. I want to fill in all classes with teachers and set the constraints, but the output stays the same - optapy

These are the outputs: https://i.stack.imgur.com/aVepW.png
constraints.py
from optapy import constraint_provider, get_class
from optapy.types import Joiners, HardSoftScore
from domain import TimeTable, Lesson, Room
from datetime import datetime, date, timedelta
TimeTableClass = get_class(TimeTable)
LessonClass = get_class(Lesson)
RoomClass = get_class(Room)
today = date.today()
def within30Mins(lesson1, lesson2):
    between = datetime.combine(today, lesson1.timeslot.endTime) - datetime.combine(today, lesson2.timeslot.startTime)
    return timedelta(minutes=0) <= between <= timedelta(minutes=30)
# 1. Hard constraints and soft constraints
@constraint_provider
def defineConstraints(constraintFactory):
    return [
        # Hard constraints
        roomConflict(constraintFactory),
        teacherConflict(constraintFactory),
        # studentGroupConflict(constraintFactory),
        # 2. Soft constraints
        teacherRoomStability(constraintFactory),
        teacherTimeEfficiency(constraintFactory),
        # studentGroupSubjectVariety(constraintFactory),
        # curriculum_needs_to_be_met(constraintFactory),
    ]
# A room can accommodate at most one lesson at the same time.
def roomConflict(constraintFactory):
    return constraintFactory \
        .fromUniquePair(LessonClass,
                        # ... in the same timeslot ...
                        [Joiners.equal(lambda lesson: lesson.timeslot),
                         # ... in the same room ...
                         Joiners.equal(lambda lesson: lesson.room),
                         Joiners.equal(lambda lesson: lesson.timeslot.dayOfWeek)]) \
        .penalize("Room conflict", HardSoftScore.ONE_HARD)

# A teacher can teach at most one lesson at the same time.
def teacherConflict(constraintFactory):
    return constraintFactory \
        .fromUniquePair(LessonClass,
                        [Joiners.equal(lambda lesson: lesson.timeslot),
                         Joiners.equal(lambda lesson: lesson.teacher)]) \
        .penalize("Teacher conflict", HardSoftScore.ONE_HARD)

# A student can attend at most one lesson at the same time.
def studentGroupConflict(constraintFactory):
    return constraintFactory \
        .fromUniquePair(LessonClass,
                        [Joiners.equal(lambda lesson: lesson.timeslot),
                         Joiners.equal(lambda lesson: lesson.studentGroup)]) \
        .penalize("Student group conflict", HardSoftScore.ONE_HARD)

# A teacher prefers to teach in a single room.
def teacherRoomStability(constraintFactory):
    return constraintFactory \
        .fromUniquePair(LessonClass,
                        [Joiners.equal(lambda lesson: lesson.teacher)]) \
        .filter(lambda lesson1, lesson2: lesson1.room != lesson2.room) \
        .penalize("Teacher room stability", HardSoftScore.ONE_SOFT)

# A teacher prefers to teach sequential lessons and dislikes gaps between lessons.
def teacherTimeEfficiency(constraintFactory):
    return constraintFactory.from_(LessonClass) \
        .join(LessonClass, [Joiners.equal(lambda lesson: lesson.teacher),
                            Joiners.equal(lambda lesson: lesson.timeslot.dayOfWeek)]) \
        .filter(within30Mins) \
        .reward("Teacher time efficiency", HardSoftScore.ONE_SOFT)

# A student group dislikes sequential lessons on the same subject.
def studentGroupSubjectVariety(constraintFactory):
    return constraintFactory.from_(LessonClass) \
        .join(LessonClass,
              [Joiners.equal(lambda lesson: lesson.subject),
               Joiners.equal(lambda lesson: lesson.studentGroup),
               Joiners.equal(lambda lesson: lesson.timeslot.dayOfWeek)]) \
        .filter(within30Mins) \
        .penalize("Student group subject variety", HardSoftScore.ONE_SOFT)

Related

CASE WHEN in a merge statement in Databricks

I am trying to upsert in Databricks using a merge statement in PySpark. I wanted to know if using expressions (e.g. adding two columns, CASE WHEN) is allowed in the whenMatchedUpdate part. For example, I want to do something like this:
deltaTableTarget = DeltaTable.forPath(spark, delta_table_path)

deltaTableTarget.alias('TargetTable') \
    .merge(
        broadcast(df_transformed.alias('DeltaSource')),
        "DeltaSource.primary_key == TargetTable.primary_key"
    ) \
    .whenMatchedUpdate(set =
        {
            "aggcount": "DeltaSource.count + TargetTable.count",
            "max_date": "case when DeltaSource.max_date > TargetTable.max_date then DeltaSource.max_date else TargetTable.max_date end"
        }
    ) \
    .whenNotMatchedInsertAll() \
    .execute()
If I understand your logic well, you can just take the max value of the 2 columns, right?
deltaTableTarget = DeltaTable.forPath(spark, delta_table_path)

deltaTableTarget.alias('TargetTable') \
    .merge(
        broadcast(df_transformed.alias('DeltaSource')),
        "DeltaSource.primary_key == TargetTable.primary_key"
    ) \
    .whenMatchedUpdate(set =
        {
            "aggcount": "DeltaSource.count + TargetTable.count",
            "max_date": "GREATEST(DeltaSource.max_date, TargetTable.max_date)"
        }
    ) \
    .whenNotMatchedInsertAll() \
    .execute()
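As a quick sanity check that GREATEST really picks the later of the two dates (the values below are made up for illustration, not taken from your tables):

# Hypothetical dates, just to show that GREATEST returns the later value
spark.sql("SELECT GREATEST(DATE'2022-01-01', DATE'2022-03-15') AS max_date").show()
# +----------+
# |  max_date|
# +----------+
# |2022-03-15|
# +----------+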
If this is not correct, something you could do is use multiple whenMatchedUpdate() functions with a condition.
deltaTableTarget = DeltaTable.forPath(spark, delta_table_path)

deltaTableTarget.alias('TargetTable') \
    .merge(
        broadcast(df_transformed.alias('DeltaSource')),
        "DeltaSource.primary_key == TargetTable.primary_key"
    ) \
    .whenMatchedUpdate(condition='DeltaSource.max_date > TargetTable.max_date',
        set =
        {
            "aggcount": "DeltaSource.count + TargetTable.count",
            "max_date": "DeltaSource.max_date"
        }
    ) \
    .whenMatchedUpdate(set =
        {
            "aggcount": "DeltaSource.count + TargetTable.count",
            "max_date": "TargetTable.max_date"
        }
    ) \
    .whenNotMatchedInsertAll() \
    .execute()

Rest Client gives Bad Request for HTTP PUT request

I'm new to REST client development. I need your help in figuring out how to get a proper response for the REST service below.
curl --location --request PUT 'sandbox-url/TokenGeneratorAPI/v1/update_pay_status' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic {{token}}' \
--data-raw '{
    "pay_id": 000000000,
    "status": 1,
    "amount": 0.00,
    "pay_method": 0,
    "pay_sys_ref": "test"
}'
I created a class on the client side for the request object, like below.
public class PaymentRqBean {
    @XmlElement(name="pay_id")
    String pay_id;
    @XmlElement(name="status")
    String status;
    @XmlElement(name="amount")
    String amount;
    @XmlElement(name="pay_method")
    String pay_method;
    @XmlElement(name="pay_sys_ref")
    String pay_sys_ref;

    public String getPay_id() {
        return pay_id;
    }
    public void setPay_id(String pay_id) {
        this.pay_id = pay_id;
    }
}
... and getters/setters for the other attributes.
And I created a method for calling the web service in another class, like below.
public PaymentRsBean callWS(PaymentRqBean pReq) {
    PaymentRsBean prs = new PaymentRsBean();
    Client client = ClientBuilder.newClient();
    client.register(new Authenticator(com.boc.conf.Configurations.objProperty.getProperty("ikUserName"),
                                      com.boc.conf.Configurations.objProperty.getProperty("ikPassword")));
    WebTarget webTarget = client.target(url);
    Invocation.Builder invocationBuilder = webTarget.request(MediaType.APPLICATION_JSON);
    Response response = invocationBuilder.accept(MediaType.APPLICATION_JSON).put(Entity.json(pReq));
I'm getting HTTP 400 BAD REQUEST for the above. I would be so thankful for any help in solving this.
(Is it because of @XmlRootElement?)
The problem is the way you are defining the attributes of your PaymentRqBean class.
The variables 'pay_id', 'status', 'amount' and 'pay_method' must be of a numeric type (int, double, ...), since the request you send indicates that they are numeric, not Strings:
"pay_id":000000000,
"status":1,
"amount":0.00,
"pay_method":0

Spark stream stops abruptly - "the specified path does not exist"

I am working with Spark Structured Streaming. My stream works fine, but after some time it just stops because of the issue below.
Any suggestion on what could be the reason and how to resolve this issue?
java.io.FileNotFoundException: Operation failed: "The specified path does not exist.", 404, GET, https://XXXXXXXX.dfs.core.windows.net/output?upn=false&resource=filesystem&maxResults=5000&directory=XXXXXXXX&timeout=90&recursive=true, PathNotFound, "The specified path does not exist. RequestId:d1b7c77f-e01f-0027-7f09-4646f7000000 Time:2022-04-01T20:47:30.1791444Z"
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1290)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listKeysWithPrefix(AzureBlobFileSystem.java:530)
at com.databricks.tahoe.store.EnhancedAzureBlobFileSystemUpgrade.listKeysWithPrefix(EnhancedFileSystem.scala:605)
at com.databricks.tahoe.store.EnhancedDatabricksFileSystemV2.$anonfun$listKeysWithPrefix$1(EnhancedFileSystem.scala:374)
at com.databricks.backend.daemon.data.client.DBFSV2.$anonfun$listKeysWithPrefix$1(DatabricksFileSystemV2.scala:247)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:395)
at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:484)
at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:504)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:266)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:261)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:258)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionContext(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:305)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:297)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionTags(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:479)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:404)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperationWithResultTags(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:395)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:367)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperation(DatabricksFileSystemV2.scala:510)
at com.databricks.backend.daemon.data.client.DBFSV2.listKeysWithPrefix(DatabricksFileSystemV2.scala:240)
at com.databricks.tahoe.store.EnhancedDatabricksFileSystemV2.listKeysWithPrefix(EnhancedFileSystem.scala:374)
at com.databricks.tahoe.store.AzureLogStore.listKeysWithPrefix(AzureLogStore.scala:54)
at com.databricks.tahoe.store.DelegatingLogStore.listKeysWithPrefix(DelegatingLogStore.scala:251)
at com.databricks.sql.fileNotification.autoIngest.FileEventBackfiller$.listFiles(FileEventWorkerThread.scala:967)
at com.databricks.sql.fileNotification.autoIngest.FileEventBackfiller.runInternal(FileEventWorkerThread.scala:876)
at com.databricks.sql.fileNotification.autoIngest.FileEventBackfiller.run(FileEventWorkerThread.scala:809)
Caused by: Operation failed: "The specified path does not exist.", 404, GET, https://XXXXXXXXXX.dfs.core.windows.net/output?upn=false&resource=filesystem&maxResults=5000&directory=XXXXXXXX&timeout=90&recursive=true, PathNotFound, "The specified path does not exist. RequestId:02ae07cf-901f-0001-080e-46dd43000000 Time:2022-04-01T21:21:40.2136657Z"
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:241)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:235)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listFiles(AzureBlobFileSystemStore.java:1112)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.access$200(AzureBlobFileSystemStore.java:143)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore$1.fetchMoreResults(AzureBlobFileSystemStore.java:1052)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore$1.<init>(AzureBlobFileSystemStore.java:1033)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listKeysWithPrefix(AzureBlobFileSystemStore.java:1029)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listKeysWithPrefix(AzureBlobFileSystem.java:527)
... 27 more
Below is my code:
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.sql.types import StructType, StringType
from pyspark.sql import functions as F
from delta.tables import *

spark.sql("set spark.sql.files.ignoreMissingFiles=true")

filteredRawDF = ""
try:
    filteredRawDF = spark.readStream.format("cloudFiles") \
        .option("cloudFiles.format", "json") \
        .option("cloudFiles.schemaLocation", landingcheckPointFilePath) \
        .option("cloudFiles.inferColumnTypes", "true") \
        .load(landingFilePath) \
        .select(from_json('body', schema).alias('temp')) \
        .select(explode("temp.report.data").alias("details")) \
        .select("details",
                explode("details.breakdown").alias("inner_breakdown")) \
        .select("details", "inner_breakdown",
                explode("inner_breakdown.breakdown").alias("outer_breakdown")) \
        .select(to_timestamp(col("details.name"), "yyyy-MM-dd'T'HH:mm:ss+SSSS").alias('datetime'),
                col("details.year"),
                col("details.day"),
                col("details.hour"),
                col("details.minute"),
                col("inner_breakdown.name").alias("hotelName"),
                col("outer_breakdown.name").alias("checkindate"),
                col("outer_breakdown.counts")[0].cast("int").alias("HdpHits"))
except Exception as e:
    print(e)

query = filteredRawDF \
    .writeStream \
    .format("delta") \
    .option("mergeSchema", "true") \
    .outputMode("append") \
    .option("checkpointLocation", checkPointPath) \
    .trigger(processingTime='50 seconds') \
    .start(savePath)
Thanks

How to convert this code to scala (using adal to generate Azure AD token)

I am currently working on Scala code that can establish a connection to a SQL Server database using an AD token.
There is too little documentation on the subject online, so I tried to work it out in Python. Now that it is working, I am looking to convert my code to Scala.
Here is the Python script:
context = adal.AuthenticationContext(AUTHORITY_URL)
token = context.acquire_token_with_client_credentials("https://database.windows.net/", CLIENT_ID, CLIENT_SECRET)
access_token = token["accessToken"]

df = spark.read \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .option("url", URL) \
    .option("dbtable", "tab1") \
    .option("accessToken", access_token) \
    .option("hostNameInCertificate", "*.database.windows.net") \
    .load()

df.show()
Here is the Java code that you can use as a base, using the acquireToken function:
import com.microsoft.aad.adal4j.AuthenticationContext;
import com.microsoft.aad.adal4j.AuthenticationResult;
import com.microsoft.aad.adal4j.ClientCredential;
...
String authority = "https://login.microsoftonline.com/<org-uuid>";
ExecutorService service = Executors.newFixedThreadPool(1);
AuthenticationContext context = new AuthenticationContext(authority, true, service);
ClientCredential credential = new ClientCredential("sp-client-id", "sp-client-secret");
AuthenticationResult result = context.acquireToken("resource_id", credential, null).get();
// get token
String token = result.getAccessToken();
P.S. But really, ADAL's usage isn't recommended anymore, it's better to use MSAL instead (here is the migration guide)
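For reference, a minimal sketch of what the token-acquisition part of the Python script above could look like after that suggested ADAL-to-MSAL migration; AUTHORITY_URL, CLIENT_ID and CLIENT_SECRET are assumed to be the same placeholders as in the question, and the ".default" scope for the Azure SQL resource is an assumption:

import msal

# Same placeholders as in the question's ADAL snippet (assumptions)
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=AUTHORITY_URL,
    client_credential=CLIENT_SECRET,
)

# Client-credentials flow against the Azure SQL resource
result = app.acquire_token_for_client(scopes=["https://database.windows.net/.default"])
access_token = result["access_token"]  # pass this to the "accessToken" option as before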

How to create a graphql schema that can be searched on different fields?

I'm testing out Sangria to build a GraphQL/Relay server. I have a very simple User class:
case class User(
  id: Int,
  username: String,
  gender: Gender.Value)
I want to allow queries by either ID or username. I have created a schema that allows this, but the fields have different names:
val Query = ObjectType(
  "Query", fields[UserRepo, Unit](
    Field("user", OptionType(User),
      arguments = ID :: Nil,
      resolve = ctx => ctx.ctx.getUser(ctx arg ID)),
    Field("userByUsername", OptionType(User),
      arguments = Username :: Nil,
      resolve = ctx => ctx.ctx.getUserByUsername(ctx arg Username))
  ))
Unfortunately, I need to query these with different field names, user and userByUsername, e.g.:
curl -G localhost:8080/graphql \
  --data-urlencode 'query={userByUsername(username: "Leia Skywalker") {id, username, gender}}'
or
curl -G localhost:8080/graphql \
  --data-urlencode "query={user(id: 1025) {id, username, gender}}"
How can I create a schema that allows a single field called user to be queried on either ID or username? E.g. both of the following should return the same user object:
curl -G localhost:8080/graphql \
  --data-urlencode 'query={user(username: "Leia Skywalker") {id, username, gender}}'
or
curl -G localhost:8080/graphql \
  --data-urlencode "query={user(id: 1025) {id, username, gender}}"
I finally worked it out:
val ID = Argument("id", OptionInputType(IntType), description = "id of the user")
val Username = Argument("username", OptionInputType(StringType), description = "username of the user")

val Query = ObjectType(
  "Query", fields[UserRepo, Unit](
    Field("user", OptionType(User),
      arguments = List(ID, Username),
      resolve = ctx => ctx.ctx.getUser(ctx.argOpt(ID), ctx.argOpt(Username)))
  ))
And getUser looks like:
def getUser(id: Option[Int], username: Option[String]): Option[User] = {
  if (id.isEmpty && username.isEmpty) {
    None
  } else {
    id.flatMap(i => users.find(_.id == i))
      .orElse(username.flatMap(u => users.find(_.username == u)))
  }
}