Number of days between two Instants doesn't change when changing a specific end date - java-17

I have a class that I use to get the number of days between two Instants:
public static Long getDaysBetween(final Instant startInstant, final Instant endInstant) {
    final ZonedDateTime startDate = startInstant.atZone(ZoneId.systemDefault());
    final ZonedDateTime endDate = endInstant.atZone(ZoneId.systemDefault());
    return ChronoUnit.DAYS.between(startDate, endDate);
}
And the corresponding test class:
class WeekCalculatorTest {

    @Test
    void givenStartAndEndInstant_whenCalculatingTheNumberOfDaysBetween_thenTheResultShouldBe() {
        final String start = "2022-09-01T00:00:00Z";
        final String end = "2022-10-31T00:00:00Z";
        final Instant startInstant = Instant.parse(start);
        final Instant endInstant = Instant.parse(end);
        assertThat(WeekCalculator.getDaysBetween(startInstant, endInstant), is(59L));
    }
}
I don't understand why, when I change the end variable to final String end = "2022-10-30T00:00:00Z";, my test still passes.
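The likely explanation is a daylight-saving transition in ZoneId.systemDefault(): ChronoUnit.DAYS counts complete days, and most European zones left DST on 2022-10-30, so both end dates land exactly 59 complete days after the start. A minimal sketch, assuming a DST-observing default zone such as Europe/Paris:

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class DaysBetweenDemo {
    public static void main(String[] args) {
        Instant start = Instant.parse("2022-09-01T00:00:00Z");
        Instant endOct31 = Instant.parse("2022-10-31T00:00:00Z");
        Instant endOct30 = Instant.parse("2022-10-30T00:00:00Z");

        // In a zone whose DST ended on 2022-10-30 (e.g. Europe/Paris),
        // both end instants are 59 complete days after the start.
        ZoneId paris = ZoneId.of("Europe/Paris");
        System.out.println(ChronoUnit.DAYS.between(start.atZone(paris), endOct31.atZone(paris))); // 59
        System.out.println(ChronoUnit.DAYS.between(start.atZone(paris), endOct30.atZone(paris))); // 59

        // Pinning the calculation to a fixed offset makes the two end dates differ again.
        System.out.println(ChronoUnit.DAYS.between(start.atZone(ZoneOffset.UTC), endOct31.atZone(ZoneOffset.UTC))); // 60
        System.out.println(ChronoUnit.DAYS.between(start.atZone(ZoneOffset.UTC), endOct30.atZone(ZoneOffset.UTC))); // 59
    }
}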


processing data before presentation

I have a dataset (from a JSON source) with cumulative values. It looks like this:
Can I extract from this dataset a delta for the last hour or the last day (for example, counting from 0 since the last midnight)?
What you are asking about falls squarely in the realm of process data as it usually comes from control systems, a.k.a. process control systems. There may be DCS (Distributed Control Systems) or SCADA systems out in the field that act as a focal point for receiving data, and there may be a process historian or time-series database for accessing that data, if not at an enterprise level then at least within the process controls network.
Much of the engineering associated with process data has been established for many, many decades. For my examples, I did not want to write too many custom classes, so I will use some everyday .NET objects. However, I am adhering to two such well-regarded principles about process data:
All times will be in UTC. Usually one does not convert the UTC time until the very last moment, when displaying it to a local user.
Process data acknowledges the Quality of a value. While there can be dozens of bad states associated with such Quality, I will use a simple binary approach of good or bad. Since I use double, a value is good as long as it is not double.NaN.
That said, I assume you have a class that looks similar to:
public class JsonDto
{
    public string Id { get; set; }
    public DateTime Time { get; set; }
    public double value { get; set; }
}
Granted, your class name may be different, but the main thing is that this class holds an individual instance of process data. When you read a JSON file, it will produce a List<JsonDto> instance.
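For reference, a minimal sketch of reading such a file with System.Text.Json (the file name is an assumption; case-insensitive matching covers the lowercase value property):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

var json = File.ReadAllText("process-data.json");
var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
List<JsonDto> inputValues = JsonSerializer.Deserialize<List<JsonDto>>(json, options);

Something like this could stand in for the "magically populate inputValues" step in the usage example further down.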
You will need lots of methods to transform the data into something a wee bit more usable in order to get to where the rubber finally meets the road: producing hourly differences. But that first requires producing hourly values, because there is no guarantee that your recorded values occur exactly on each hour.
ProcessData Class - lots of methods
using System;
using System.Collections.Generic;
using System.Linq;

public static class ProcessData
{
    public enum CalculationTimeBasis { Auto = 0, EarliestTime, MostRecentTime, MidpointTime }

    public static Dictionary<string, SortedList<DateTime, double>> GetTagTimedValuesMap(IEnumerable<JsonDto> jsonDto)
    {
        var map = new Dictionary<string, SortedList<DateTime, double>>();
        var tagnames = jsonDto.Select(x => x.Id).Distinct().OrderBy(x => x);
        foreach (var tagname in tagnames)
        {
            map.Add(tagname, new SortedList<DateTime, double>());
        }
        var orderedValues = jsonDto.OrderBy(x => x.Id).ThenBy(x => x.Time.ToUtcTime());
        foreach (var item in orderedValues)
        {
            map[item.Id].Add(item.Time.ToUtcTime(), item.value);
        }
        return map;
    }

    public static DateTimeKind UnspecifiedDefaultsTo { get; set; } = DateTimeKind.Utc;

    public static DateTime ToUtcTime(this DateTime value)
    {
        // Unlike ToUniversalTime(), this method assumes any Unspecified Kind may be Utc or Local.
        if (value.Kind == DateTimeKind.Unspecified)
        {
            if (UnspecifiedDefaultsTo == DateTimeKind.Utc)
            {
                value = DateTime.SpecifyKind(value, DateTimeKind.Utc);
            }
            else if (UnspecifiedDefaultsTo == DateTimeKind.Local)
            {
                value = DateTime.SpecifyKind(value, DateTimeKind.Local);
            }
        }
        return value.ToUniversalTime();
    }

    private static DateTime TruncateTime(this DateTime value, TimeSpan interval) => new DateTime(TruncateTicks(value.Ticks, interval.Ticks)).ToUtcTime();

    private static long TruncateTicks(long ticks, long interval) => (interval == 0) ? ticks : (ticks / interval) * interval;

    public static SortedList<DateTime, double> GetInterpolatedValues(SortedList<DateTime, double> recordedValues, TimeSpan interval)
    {
        if (interval <= TimeSpan.Zero)
        {
            throw new ArgumentOutOfRangeException($"{nameof(interval)} TimeSpan must be greater than zero");
        }
        var interpolatedValues = new SortedList<DateTime, double>();
        var previous = recordedValues.First();
        var intervalTimestamp = previous.Key.TruncateTime(interval);
        foreach (var current in recordedValues)
        {
            if (current.Key == intervalTimestamp)
            {
                // It's easy when the current recorded value aligns perfectly on the desired interval.
                interpolatedValues.Add(current.Key, current.Value);
                intervalTimestamp += interval;
            }
            else if (current.Key > intervalTimestamp)
            {
                // We do not exactly align at the desired time, so we must interpolate
                // between the "last recorded data" BEFORE the desired time (i.e. previous)
                // and the "first recorded data" AFTER the desired time (i.e. current).
                var interpolatedValue = GetInterpolatedValue(intervalTimestamp, previous, current);
                interpolatedValues.Add(interpolatedValue.Key, interpolatedValue.Value);
                intervalTimestamp += interval;
            }
            previous = current;
        }
        return interpolatedValues;
    }

    private static KeyValuePair<DateTime, double> GetInterpolatedValue(DateTime interpolatedTime, KeyValuePair<DateTime, double> left, KeyValuePair<DateTime, double> right)
    {
        if (!double.IsNaN(left.Value) && !double.IsNaN(right.Value))
        {
            double totalDuration = (right.Key - left.Key).TotalSeconds;
            if (Math.Abs(totalDuration) > double.Epsilon)
            {
                double partialDuration = (interpolatedTime - left.Key).TotalSeconds;
                double factor = partialDuration / totalDuration;
                double calculation = left.Value + ((right.Value - left.Value) * factor);
                return new KeyValuePair<DateTime, double>(interpolatedTime, calculation);
            }
        }
        return new KeyValuePair<DateTime, double>(interpolatedTime, double.NaN);
    }

    public static SortedList<DateTime, double> GetDeltaValues(SortedList<DateTime, double> values, CalculationTimeBasis timeBasis = CalculationTimeBasis.Auto)
    {
        const CalculationTimeBasis autoDefaultsTo = CalculationTimeBasis.MostRecentTime;
        var deltas = new SortedList<DateTime, double>(capacity: values.Count);
        var previous = values.First();
        foreach (var current in values.Skip(1))
        {
            var time = GetTimeForBasis(timeBasis, previous.Key, current.Key, autoDefaultsTo);
            var diff = current.Value - previous.Value;
            deltas.Add(time, diff);
            previous = current;
        }
        return deltas;
    }

    private static DateTime GetTimeForBasis(CalculationTimeBasis timeBasis, DateTime earliestTime, DateTime mostRecentTime, CalculationTimeBasis autoDefaultsTo)
    {
        if (timeBasis == CalculationTimeBasis.Auto)
        {
            // Different (future) methods calling this may require different interpretations of Auto.
            // Thus we leave it to the calling method to declare what Auto means to it.
            timeBasis = autoDefaultsTo;
        }
        switch (timeBasis)
        {
            case CalculationTimeBasis.EarliestTime:
                return earliestTime;
            case CalculationTimeBasis.MidpointTime:
                return new DateTime((earliestTime.Ticks + mostRecentTime.Ticks) / 2L).ToUtcTime();
            case CalculationTimeBasis.MostRecentTime:
                return mostRecentTime;
            case CalculationTimeBasis.Auto:
            default:
                return earliestTime;
        }
    }
}
Usage Example
var inputValues = new List<JsonDto>();
// TODO: Magically populate inputValues
var tagDataMap = ProcessData.GetTagTimedValuesMap(inputValues);
foreach (var item in tagDataMap)
{
    // The following generates hourly differences for the one Tag Id (item.Key)
    // by first generating hourly data, and then finding the delta of that.
    var hourlyValues = ProcessData.GetInterpolatedValues(item.Value, TimeSpan.FromHours(1));
    // Consider the difference between Hour(1) and Hour(2).
    // That is, 2 input values will create 1 output value.
    // Now you must decide which of the 2 input times you use for the 1 output time.
    // This is what I call the CalculationTimeBasis.
    // The time basis used will be Auto, which defaults to the most recent for this particular method, e.g. Hour(2).
    var deltaValues = ProcessData.GetDeltaValues(hourlyValues);
    // Same as above, except we explicitly state we want the most recent time, e.g. also Hour(2).
    var deltaValues2 = ProcessData.GetDeltaValues(hourlyValues, ProcessData.CalculationTimeBasis.MostRecentTime);
    // Here the calculated differences are the same, except the
    // timestamp now reflects the earliest time, e.g. Hour(1).
    var deltaValues3 = ProcessData.GetDeltaValues(hourlyValues, ProcessData.CalculationTimeBasis.EarliestTime);
}
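To cover the "count from 0 since last midnight" part of the question, the same two calls should work inside that loop with a one-day interval (a sketch; TruncateTime aligns a 1-day interval on UTC midnights):

// Inside the foreach loop above: daily deltas instead of hourly ones.
var dailyValues = ProcessData.GetInterpolatedValues(item.Value, TimeSpan.FromDays(1));
var dailyDeltas = ProcessData.GetDeltaValues(dailyValues, ProcessData.CalculationTimeBasis.EarliestTime);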

Flutter : DateTime.now().toIso8601String() Math

I have DateTime.now().toIso8601String() = 2022-08-09T03:01:32.223255.
How can I find out whether 3 days have passed since that date?
You can parse the string back into a DateTime object:
final dateTime = DateTime.now();
final stringDateTime = dateTime.toIso8601String();
final parsedDateTime = DateTime.parse(stringDateTime);
In this case, dateTime and parsedDateTime are the same.
Then, to find the time difference, use the difference instance method of DateTime, which returns a Duration. Example from the Dart documentation:
final berlinWallFell = DateTime.utc(1989, DateTime.november, 9);
final dDay = DateTime.utc(1944, DateTime.june, 6);
final difference = berlinWallFell.difference(dDay);
print(difference.inDays); // 16592
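Putting the two together for the original question (a sketch; the literal string stands in for your stored value):

final stored = DateTime.parse('2022-08-09T03:01:32.223255');
final threeDaysHavePassed = DateTime.now().difference(stored).inDays >= 3;
print(threeDaysHavePassed);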
Use the difference method on a DateTime object:
DateTime now = DateTime.now();
DateTime addedThreeDays = DateTime.now().add(Duration(days: 3));
print(3 >= now.difference(addedThreeDays).abs().inDays);

How to do Geofence monitoring/analytics using KSQLDB?

I am trying to do geofence monitoring/analytics using KSQLDB. I want to get a message whenever a vehicle ENTERS/LEAVES a geofence. Taking inspiration from https://github.com/gschmutz/various-demos/tree/master/kafka-geofencing, I have created a UDF named GEOFENCE; the code for it is below.
Below is my query to perform a join on the geofence stream and the live vehicle position stream:
CREATE STREAM join_live_pos_geofence_status_1 AS SELECT lp1.vehicleid,
    lp1.lat,
    lp1.lon,
    s1p.geofencecoordinates,
    Geofence(lp1.lat, lp1.lon, 'POLYGON(('+s1p.geofencecoordinates+'))') AS geofence_status
FROM live_position_1 LP1
LEFT JOIN stream_1_processed S1P WITHIN 72 HOURS
ON lp1.clusterid = s1p.clusterid EMIT CHANGES;
I am taking into account all the geofences created in the last 3 days.
I have created another query that uses the geofence status from the previous query to calculate whether the vehicle is ENTERING/LEAVING a geofence:
CREATE STREAM join_geofence_monitoring_1 AS SELECT *,
    Geofence(jlpgs1.lat, jlpgs1.lon, 'POLYGON(('+jlpgs1.geofencecoordinates+'))', jlpgs1.geofence_status) geofence_monitoring_status
FROM join_live_pos_geofence_status_1 JLPGS1 EMIT CHANGES;
The above query gives me the output 'INSIDE', 'INSIDE' or 'OUTSIDE', 'OUTSIDE' for the geofence_status and geofence_monitoring_status columns, respectively. I know I am not taking the time aspect into account (these 2 queries should never be executed at the same time, say 't0'), but I cannot think of the correct way of doing this.
// Imports assume the org.locationtech JTS packages (older builds use com.vividsolutions).
import io.confluent.ksql.function.udf.Udf;
import org.geotools.geometry.jts.JTSFactoryFinder;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.locationtech.jts.geom.Polygon;
import org.locationtech.jts.io.ParseException;
import org.locationtech.jts.io.WKTReader;

public class Geofence
{
    private static final String OUTSIDE = "OUTSIDE";
    private static final String INSIDE = "INSIDE";
    private static GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
    private static WKTReader wktReader = new WKTReader(geometryFactory);

    @Udf(description = "Returns whether a coordinate lies within a polygon or not")
    public static String geofence(final double latitude, final double longitude, String geometryWKT) {
        boolean status = false;
        String result = "";
        Polygon polygon = null;
        try {
            polygon = (Polygon) wktReader.read(geometryWKT);
            // However, an important point to note is that the longitude is the X value
            // and the latitude the Y value. So we say "lat/long",
            // but JTS will expect it in the order "long/lat".
            Coordinate coord = new Coordinate(longitude, latitude);
            Point point = geometryFactory.createPoint(coord);
            status = point.within(polygon);
            if (status) {
                result = INSIDE;
            } else {
                result = OUTSIDE;
            }
        } catch (ParseException e) {
            throw new RuntimeException(e.getMessage());
        }
        return result;
    }

    @Udf(description = "Returns whether a coordinate moved in or out of a polygon")
    public static String geofence(final double latitude, final double longitude, String geometryWKT, final String statusBefore) {
        String status = geofence(latitude, longitude, geometryWKT);
        if (statusBefore.equals("INSIDE") && status.equals("OUTSIDE")) {
            //status = "LEAVING";
            return "LEAVING";
        } else if (statusBefore.equals("OUTSIDE") && status.equals("INSIDE")) {
            //status = "ENTERING";
            return "ENTERING";
        }
        return status;
    }
}
My question is how can I calculate correctly that a vehicle is ENTERING/LEAVING a geofence? Is it even possible to do with KSQLDB?
Would it be correct to say that the join_live_pos_geofence_status_1 stream can have rows that go from INSIDE -> OUTSIDE and then from OUTSIDE -> INSIDE for some key value?
And what you're wanting to do is to output LEAVING and ENTERING events for these transitions?
You can likely do what you want using a custom UDAF. Custom UDAFs take an input and calculate an output, via some intermediate state. For example, an AVG UDAF would take some numbers as input, its intermediate state would be the count of inputs and the sum of inputs, and the output would be sum/count.
In your case, the input would be the current state, e.g. either INSIDE or OUTSIDE. The UDAF would need to store the last two states in its intermediate state, and then the output state can be calculated from this. E.g.
Input     Intermediate        Output
INSIDE    INSIDE              <only a single entry in intermediate - your choice what you output>
INSIDE    INSIDE,INSIDE       no-change
OUTSIDE   INSIDE,OUTSIDE      LEAVING
OUTSIDE   OUTSIDE,OUTSIDE     no-change
INSIDE    OUTSIDE,INSIDE      ENTERING
You'll need to decide what to output when there is only a single entry in the intermediate state, i.e. the first time a key is seen.
You can then filter the output to remove any rows that have no-change.
You may also need to set cache.max.bytes.buffering to zero to stop any results being conflated.
UPDATE: suggested code.
Not tested, but something like the following code may do what you want:
import io.confluent.ksql.function.udaf.Udaf;
import io.confluent.ksql.function.udaf.UdafDescription;
import io.confluent.ksql.function.udaf.UdafFactory;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

@UdafDescription(name = "my_geofence", description = "Computes the geofence status.")
public final class GeoFenceUdaf {

    private static final String STATUS_1 = "STATUS_1";
    private static final String STATUS_2 = "STATUS_2";

    @UdafFactory(description = "Computes the geofence status.",
        aggregateSchema = "STRUCT<" + STATUS_1 + " STRING, " + STATUS_2 + " STRING>")
    public static Udaf<String, Struct, String> calcGeoFenceStatus() {
        final Schema STRUCT_SCHEMA = SchemaBuilder.struct().optional()
            .field(STATUS_1, Schema.OPTIONAL_STRING_SCHEMA)
            .field(STATUS_2, Schema.OPTIONAL_STRING_SCHEMA)
            .build();

        return new Udaf<String, Struct, String>() {
            @Override
            public Struct initialize() {
                return new Struct(STRUCT_SCHEMA);
            }

            @Override
            public Struct aggregate(final String newValue, final Struct aggregate) {
                if (newValue == null) {
                    return aggregate;
                }
                if (aggregate.getString(STATUS_1) == null) {
                    // First status for this key:
                    return aggregate.put(STATUS_1, newValue);
                }
                final String lastStatus = aggregate.getString(STATUS_2);
                if (lastStatus == null) {
                    // Second status for this key:
                    return aggregate.put(STATUS_2, newValue);
                }
                // Third and subsequent status for this key:
                return aggregate
                    .put(STATUS_1, lastStatus)
                    .put(STATUS_2, newValue);
            }

            @Override
            public String map(final Struct aggregate) {
                final String previousStatus = aggregate.getString(STATUS_1);
                final String currentStatus = aggregate.getString(STATUS_2);
                if (currentStatus == null) {
                    // Only have a single status, i.e. the first status for this key.
                    // What to do? Probably want to do:
                    return previousStatus.equalsIgnoreCase("OUTSIDE")
                        ? "LEAVING"
                        : "ENTERING";
                }
                // Two statuses ...
                if (currentStatus.equals(previousStatus)) {
                    return "NO CHANGE";
                }
                return previousStatus.equalsIgnoreCase("OUTSIDE")
                    ? "ENTERING"
                    : "LEAVING";
            }

            @Override
            public Struct merge(final Struct agg1, final Struct agg2) {
                throw new RuntimeException("Function does not support session windows");
            }
        };
    }
}
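For completeness, a hedged sketch of how the UDAF might be invoked from ksqlDB, reusing the stream and column names from the question (untested):

CREATE TABLE geofence_transitions AS
    SELECT vehicleid,
        MY_GEOFENCE(geofence_status) AS transition
    FROM join_live_pos_geofence_status_1
    GROUP BY vehicleid
    EMIT CHANGES;

Rows whose transition is 'NO CHANGE' can then be filtered out, as suggested above.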

Apache beam stream processing event time

I'm trying to build an event processing stream using Apache Beam.
Steps which happen in my stream:
Read from Kafka topics in Avro format and deserialize Avro using the schema registry
Create a fixed-size window (1 hour) with triggering every 10 min (processing time)
Write Avro files to GCS, dividing directories by topic name (filename = schema + start-end window pane)
Now let's dig into the code.
This code shows how I read from Kafka. I use a custom deserializer and coder to deserialize properly using the schema registry (in my case it's Hortonworks).
KafkaIO.<String, AvroGenericRecord>read()
    .withBootstrapServers(bootstrapServers)
    .withConsumerConfigUpdates(configUpdates)
    .withTopics(inputTopics)
    .withKeyDeserializer(StringDeserializer.class)
    .withValueDeserializerAndCoder(BeamKafkaAvroGenericDeserializer.class, AvroGenericCoder.of(serDeConfig()))
    .commitOffsetsInFinalize()
    .withoutMetadata();
In the pipeline, after reading records with KafkaIO, I create the windowing:
records.apply(Window.<AvroGenericRecord>into(FixedWindows.of(Duration.standardHours(1)))
    .triggering(AfterWatermark.pastEndOfWindow()
        .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(10)))
        .withLateFirings(AfterPane.elementCountAtLeast(1))
    )
    .withAllowedLateness(Duration.standardMinutes(5))
    .discardingFiredPanes()
);
What I want to achieve with this window is to group data by event time every 1 hour and trigger every 10 min.
After grouping by window, it starts writing into Google Cloud Storage (GCS):
public class WriteAvroFilesTr extends PTransform<PCollection<AvroGenericRecord>, WriteFilesResult<AvroDestination>> {
    private String baseDir;
    private int numberOfShards;

    public WriteAvroFilesTr(String baseDir, int numberOfShards) {
        this.baseDir = baseDir;
        this.numberOfShards = numberOfShards;
    }

    @Override
    public WriteFilesResult<AvroDestination> expand(PCollection<AvroGenericRecord> input) {
        ResourceId tempDir = getTempDir(baseDir);
        return input.apply(AvroIO.<AvroGenericRecord>writeCustomTypeToGenericRecords()
            .withTempDirectory(tempDir)
            .withWindowedWrites()
            .withNumShards(numberOfShards)
            .to(new DynamicAvroGenericRecordDestinations(baseDir, Constants.FILE_EXTENSION))
        );
    }

    private ResourceId getTempDir(String baseDir) {
        return FileSystems.matchNewResource(baseDir + "/temp", true);
    }
}
And
public class DynamicAvroGenericRecordDestinations extends DynamicAvroDestinations<AvroGenericRecord, AvroDestination, GenericRecord> {
    private static final DateTimeFormatter formatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss");
    private final String baseDir;
    private final String fileExtension;

    public DynamicAvroGenericRecordDestinations(String baseDir, String fileExtension) {
        this.baseDir = baseDir;
        this.fileExtension = fileExtension;
    }

    @Override
    public Schema getSchema(AvroDestination destination) {
        return new Schema.Parser().parse(destination.jsonSchema);
    }

    @Override
    public GenericRecord formatRecord(AvroGenericRecord record) {
        return record.getRecord();
    }

    @Override
    public AvroDestination getDestination(AvroGenericRecord record) {
        Schema schema = record.getRecord().getSchema();
        return AvroDestination.of(record.getName(), record.getDate(), record.getVersionId(), schema.toString());
    }

    @Override
    public AvroDestination getDefaultDestination() {
        return new AvroDestination();
    }

    @Override
    public FileBasedSink.FilenamePolicy getFilenamePolicy(AvroDestination destination) {
        String pathStr = baseDir + "/" + destination.name + "/" + destination.date + "/" + destination.name;
        return new WindowedFilenamePolicy(FileBasedSink.convertToFileResourceIfPossible(pathStr), destination.version, fileExtension);
    }

    private static class WindowedFilenamePolicy extends FileBasedSink.FilenamePolicy {
        final ResourceId outputFilePrefix;
        final String fileExtension;
        final Integer version;

        WindowedFilenamePolicy(ResourceId outputFilePrefix, Integer version, String fileExtension) {
            this.outputFilePrefix = outputFilePrefix;
            this.version = version;
            this.fileExtension = fileExtension;
        }

        @Override
        public ResourceId windowedFilename(
                int shardNumber,
                int numShards,
                BoundedWindow window,
                PaneInfo paneInfo,
                FileBasedSink.OutputFileHints outputFileHints) {
            IntervalWindow intervalWindow = (IntervalWindow) window;
            String filenamePrefix =
                outputFilePrefix.isDirectory() ? "" : firstNonNull(outputFilePrefix.getFilename(), "");
            String filename =
                String.format("%s-%s(%s-%s)-(%s-of-%s)%s", filenamePrefix,
                    version,
                    formatter.print(intervalWindow.start()),
                    formatter.print(intervalWindow.end()),
                    shardNumber,
                    numShards - 1,
                    fileExtension);
            ResourceId result = outputFilePrefix.getCurrentDirectory();
            return result.resolve(filename, RESOLVE_FILE);
        }

        @Override
        public ResourceId unwindowedFilename(
                int shardNumber, int numShards, FileBasedSink.OutputFileHints outputFileHints) {
            throw new UnsupportedOperationException("Expecting windowed outputs only");
        }

        @Override
        public void populateDisplayData(DisplayData.Builder builder) {
            builder.add(
                DisplayData.item("fileNamePrefix", outputFilePrefix.toString())
                    .withLabel("File Name Prefix"));
        }
    }
}
I've written out the whole of my pipeline. It kind of works well, but I am not sure whether I actually handle events by event time.
Could someone review my code (especially steps 1 & 2, where I read and group by windows) and tell me whether it windows by event time or not?
P.S. Every record in Kafka has a timestamp field inside.
UPD
Thanks jjayadeep. I included a custom TimestampPolicy in KafkaIO:
static class CustomTimestampPolicy extends TimestampPolicy<String, AvroGenericRecord> {
    protected Instant currentWatermark;

    CustomTimestampPolicy(Optional<Instant> previousWatermark) {
        this.currentWatermark = previousWatermark.orElse(BoundedWindow.TIMESTAMP_MIN_VALUE);
    }

    @Override
    public Instant getTimestampForRecord(PartitionContext ctx, KafkaRecord<String, AvroGenericRecord> record) {
        currentWatermark = Instant.ofEpochMilli(record.getKV().getValue().getTimestamp());
        return currentWatermark;
    }

    @Override
    public Instant getWatermark(PartitionContext ctx) {
        return currentWatermark;
    }
}
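For reference, a sketch of wiring this policy into the read shown earlier; the lambda matches the single-method TimestampPolicyFactory interface (untested):

KafkaIO.<String, AvroGenericRecord>read()
    .withBootstrapServers(bootstrapServers)
    .withTopics(inputTopics)
    .withKeyDeserializer(StringDeserializer.class)
    .withValueDeserializerAndCoder(BeamKafkaAvroGenericDeserializer.class, AvroGenericCoder.of(serDeConfig()))
    // Use the record's embedded timestamp as event time instead of processing time.
    .withTimestampPolicyFactory((tp, previousWatermark) -> new CustomTimestampPolicy(previousWatermark))
    .commitOffsetsInFinalize()
    .withoutMetadata();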
From the documentation present here [1], the record timestamp (event time) is set to processing time by default in KafkaIO:
By default, record timestamp (event time) is set to processing time in KafkaIO reader and source watermark is current wall time. If a topic has Kafka server-side ingestion timestamp enabled ('LogAppendTime'), it can be enabled with KafkaIO.Read.withLogAppendTime(). A custom timestamp policy can be provided by implementing TimestampPolicyFactory. See KafkaIO.Read.withTimestampPolicyFactory(TimestampPolicyFactory) for more information.
Processing time is also the default timestamp method used, as documented in the source:
// set event times and watermark based on LogAppendTime. To provide a custom
// policy see withTimestampPolicyFactory(). withProcessingTime() is the default.
1 - https://beam.apache.org/releases/javadoc/2.4.0/org/apache/beam/sdk/io/kafka/KafkaIO.html

incompatible types found : double

I am trying to write a program; the rest of the code so far works, but I am getting "incompatible types - found: double, required: GroceryItem" on line 38. Can anyone help explain why I am receiving this error and how to correct it? Thank you. Here is my code:
import java.util.Scanner;

public class GroceryList {
    private GroceryItem[] groceryArr; //ARRAY HOLDS GROCERY ITEM OBJECTS
    private int numItems;
    private String date;
    private String storeName;

    public GroceryList(String inputDate, String inputName) {
        //FILL IN CODE HERE
        // CREATE ARRAY, INITIALIZE FIELDS
        groceryArr = new GroceryItem[10];
        numItems = 0;
    }

    public void load() {
        Scanner keyboard = new Scanner(System.in);
        double sum = 0;
        System.out.println("Enter the trip date and then hit return:");
        date = keyboard.next();
        keyboard.nextLine();
        System.out.println("Enter the store name and then hit return:");
        storeName = keyboard.next();
        keyboard.nextLine();
        double number = keyboard.nextDouble();
        //NEED TO PROMPT USER FOR, AND READ IN THE DATE AND STORE NAME.
        System.out.println("Enter each item bought and the price (then return).");
        System.out.println("Terminate with an item with a negative price.");
        number = keyboard.nextDouble();
        while (number >= 0 && numItems < groceryArr.length) {
            groceryArr[numItems] = number; // line 38: the incompatible-types error
            numItems++;
            sum += number;
            System.out.println("Enter each item bought and the price (then return).");
            System.out.println("Terminate with an item with a negative price.");
            number = keyboard.nextDouble();
        }
        /*
        //READ IN AND STORE EACH ITEM. STORE NUMBER OF ITEMS
    }

    private GroceryItem computeTotalCost() {
        //add code here
    }

    public void print() {
        //call computeTotalCost
    }
        */
    }
}
"groceryArr[numItems] = number;"
groceryArr[numItems] is an instance of GroceryItem() - 'number' is a double
You need a double variable in your GroceryItem() object to store the 'number' value.
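A minimal sketch of what that could look like; the field, constructor, and method names are assumptions, since the GroceryItem class is not shown:

public class GroceryItem {
    // Assumed field: holds the price read from the keyboard.
    private final double price;

    public GroceryItem(double price) {
        this.price = price;
    }

    public double getPrice() {
        return price;
    }
}

With something like that in place, line 38 becomes: groceryArr[numItems] = new GroceryItem(number);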