[GC] Q&A - Update for 2017.01.30

Jerzak, Zbigniew zbigniew.jerzak at sap.com
Mon Jan 30 21:46:08 UTC 2017

Dear All, 

Over the last few days we have accumulated several questions, either posted to this mailing list or sent directly to us. To streamline the process, this is a cumulative question-and-answer post. Please feel free to reply if you have any additional open issues. Please note that a few questions have not been answered in this post – we are still clarifying the best possible answers to those – please expect an update soon.

Q: Where can I download the ontology files?
A: The ontology files are available at https://ckan.project-hobbit.eu/dataset/debs-grand-challenge-2017

Q: How can the HOBBIT platform be used to implement the Grand Challenge solution?
A: The page https://github.com/hobbit-project/platform/wiki/Upload%20a%20system contains detailed information about how solutions should be implemented. Moreover, the page https://github.com/hobbit-project/platform/wiki/Develop%20a%20system%20adapter contains a description of how to develop an adapter. Finally, the following page has additional information for those developing in Java: https://github.com/hobbit-project/platform/wiki/Develop%20a%20component%20in%20Java

Q: From where should a solution read the input data?
A: Solutions should read the input data from the dataQueue.

Q: How to develop the system adapter? 
A: The page https://github.com/hobbit-project/platform/wiki/Develop%20a%20system%20adapter contains a description of how to develop an adapter. Moreover, we will soon provide a “hello world” example to streamline the process.

Q: Does the platform provide any support for RDF processing (e.g. SPARQL)? Or do we have to use our own libraries to process the RDF data?
A: The platform does not provide any prepackaged libraries; solutions have to bring their own RDF processing libraries.

Q: Where can I obtain sample input data from?
A: Sample data can be downloaded here: ftp://hobbitdata.informatik.uni-leipzig.de/DEBS_GC/ We will also provide the corresponding output data soon, so that an offline analysis of a solution's correctness can be performed.

Q: How can I check the correctness of my solution before uploading it to the evaluation platform?
A: We will soon publish both input and expected output data under the following link: ftp://hobbitdata.informatik.uni-leipzig.de/DEBS_GC/ 

Q: Which clustering algorithm are we supposed to use for clustering the data?
A: The Grand Challenge web page provides a detailed description of the algorithm, which is a modified version of the k-means algorithm.
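The challenge defines its own modification of k-means, so please follow the Grand Challenge page for the exact algorithm. For orientation only, here is a minimal sketch of one standard Lloyd-style k-means iteration on one-dimensional values (the function and variable names are our own, not part of the challenge specification):

```python
def kmeans_step(points, centroids):
    """One plain k-means (Lloyd) iteration on 1-D values:
    assign each point to its nearest centroid, then recompute
    each centroid as the mean of its assigned points."""
    clusters = [[] for _ in centroids]
    for x in points:
        # Index of the centroid closest to x (ties go to the lower index).
        nearest = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Keep the old centroid when a cluster receives no points.
    return [sum(c) / len(c) if c else centroids[i]
            for i, c in enumerate(clusters)]
```

For example, kmeans_step([1, 2, 10, 11], [0, 10]) yields [1.5, 10.5]; iterating until the centroids stop moving gives the usual k-means fixed point.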

Q: Are the input RDF triples sorted according to their timestamps?
A: Yes.

Q: Should the output of the solution be sorted according to the timestamps?
A: Yes.

Q: Are the values in the declarations of ObservationGroup_(value_1), Observation_(value_2), Output_(value_3), Value_(value_3) unique throughout all the streaming data?
A: Yes. You can use these values to link the observationGroup with its relevant information tree.

Q: What is the type of the input data provided in the input queue?
A: RDF triples encoded as an array of bytes using UTF-8 encoding.
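As a quick illustration, turning one such message back into a triple string is a single UTF-8 decode (the sample triple below is made up; real messages follow the challenge ontology):

```python
def decode_message(body: bytes) -> str:
    """Decode one raw input-queue message (UTF-8 bytes) into an RDF triple string."""
    return body.decode("utf-8")

# Hypothetical example message, for illustration only.
raw = '<machine_1> <hasTimestamp> "2017-01-01T00:00:00" .'.encode("utf-8")
triple = decode_message(raw)
```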

Q: What is the expected output of the grand challenge query?
A: The expected output format is described on the Grand Challenge page. Please note that the output data stream should be ordered by the application timestamp.
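Whatever internal parallelism a solution uses, the emitted stream therefore has to be ordered by application timestamp before being written out. A minimal sketch (the tuple layout and names are our own, for illustration):

```python
def order_output(events):
    """Sort output events, given as (application_timestamp, payload) pairs,
    by their application timestamp before emitting them."""
    return sorted(events, key=lambda event: event[0])
```

For example, order_output([(2, "b"), (1, "a")]) returns [(1, "a"), (2, "b")].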

Q: Is the number and type of sensors fixed?
A: The number and type of sensors are specified in the metadata and do not change during runtime.

Q: How should anomalies be detected?
A: As a sequence of at most N transitions that occurs with a combined probability lower than a predefined threshold.
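In other words, the product of the individual transition probabilities along a window of at most N transitions is compared against the threshold. A minimal sketch of that check (the function and parameter names are our own):

```python
def is_anomalous(transition_probs, threshold, max_transitions):
    """Return True if the sequence contains at most `max_transitions`
    transitions and the product of their probabilities is below `threshold`."""
    if not transition_probs or len(transition_probs) > max_transitions:
        return False
    combined = 1.0
    for p in transition_probs:
        combined *= p
    return combined < threshold
```

For example, two transitions of probability 0.5 each have a combined probability of 0.25, which is flagged under a threshold of 0.3 but not under a threshold of 0.2.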

Q: What are the criteria for evaluation?
A: The evaluation criteria are twofold: first, the correctness of each solution is checked; subsequently, all correct solutions are ranked based on the formula specified in the “Evaluation Procedure and Criteria” section.


2017 DEBS GC Organizers 
