Important: this version of the annex may be refined based on feedback from the competitors and distributed to them via the competition mailing list and the call web site. Refinements will be documented in the Refinement of Organisation and Testing Procedures document.
Technologies
Each team should implement a localization system covering the area of the Living Lab, with no limit on the number of devices. Technologies will be accepted based on their compatibility with the constraints of the hosting Living Lab. Localization systems can be based on different sensing modalities, including (but not limited to):
as well as different measurement methods (e.g. RSS, TOF, AOA, …); a sketch of one such method follows this paragraph. The proposed systems may also combine different technologies. Moreover, the competition allows exploiting existing context information provided within the Living Lab (such as the opening/closing of doors, switching lights on/off, etc.) in order to refine the proposed localization system. Competitors are requested to integrate their solution with the existing logging and benchmarking system. The integration will be guided in detail, and a competitor's integration package will be delivered for this purpose. All details about integration are given in the Living Lab and Technical Infrastructure document. Teams should consider restrictions related to, e.g., the availability of power plugs or the attachment of devices and cables to walls and furniture in the Living Lab. Specific requirements of the proposed localization system should be communicated at an early stage so that the necessary on-site arrangements can be made. For technical inquiries, please contact the organisers by e-mail.
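As a purely illustrative example of one of the measurement methods mentioned above, the sketch below inverts the standard log-distance path-loss model to turn an RSS reading into a distance estimate. The transmit power, path-loss exponent, and example reading are hypothetical values, not parameters of the Living Lab infrastructure.

    def rss_to_distance(rss_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
        # Invert the log-distance path-loss model
        #   RSS = P0 - 10 * n * log10(d / d0), with d0 = 1 m,
        # where P0 (tx_power_dbm) and n (path_loss_exp) are assumed,
        # hypothetical calibration values.
        return 10 ** ((tx_power_dbm - rss_dbm) / (10 * path_loss_exp))

    # Example: a reading of -65 dBm from a hypothetical anchor node
    # yields roughly 10 m with the assumed parameters.
    print(rss_to_distance(-65.0))

In a complete system, single-link estimates like this would typically be combined across several anchors (e.g. by multilateration or fingerprinting) to produce a position estimate.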
Benchmark Testing
The score of each competing artefact will be evaluated by means of benchmark tests during a dedicated time slot at the Living Lab. The benchmark consists of a set of tests, each of which contributes to the artefact's score. The time slot for benchmark testing is divided into three parts. In the first part, the competing team will deploy and configure their artefact in the Living Lab; this part should last no more than 60 minutes. In the second part, the benchmark will be applied (during this phase, competitors may perform only short reconfigurations of their systems). The localization systems will be evaluated in three phases:
Further details about the paths might be disclosed to all competitors in advance. Moreover, during all phases the actor may move the appliances, in order to make the scenario as close as possible to a real-life scenario. In the last part, the team will remove their artefact from the Living Lab to enable the installation of the next competing artefact. In case of failure to meet the deadlines or non-completion of tests, teams will be awarded the minimum score for part 1 or 3, or for the missing tests, respectively.
Evaluation criteria
In order to evaluate the competing localization systems, the TPC will apply the following evaluation criteria:
For each criterion, a weighted numerical score will be awarded and added to the overall score. Where possible, the scores will be measured by direct observation or technical measurement. Where this is not possible, the score will be determined by the Evaluation Committee (EC). The EC will be composed of volunteer members of the Technical Program Committee (TPC) and will be present at the Living Lab during the competition. The EC will ensure that the benchmark tests are applied correctly to each artefact. Once both benchmark testing and EC evaluation are completed, the overall score of each artefact will be calculated using the weightings shown above. All final scores will be disclosed at the end of the competition, and the artefacts will be ranked according to this final score. A detailed description of the evaluation criteria can be found in the Detailed Evaluation Criteria document.
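To illustrate how the weighted per-criterion scores combine into the overall score, the following minimal sketch uses hypothetical criteria names, weights, and scores; the actual criteria and weightings are those defined in the Detailed Evaluation Criteria document.

    # Hypothetical criteria and weights, for illustration only; the real
    # values are defined in the Detailed Evaluation Criteria document.
    weights = {"accuracy": 0.5, "installation_complexity": 0.2,
               "user_acceptance": 0.2, "availability": 0.1}

    def overall_score(scores, weights):
        # Weighted sum of the per-criterion scores.
        return sum(weights[c] * scores[c] for c in weights)

    # Example: per-criterion scores on an assumed 0-10 scale.
    scores = {"accuracy": 8.0, "installation_complexity": 6.0,
              "user_acceptance": 7.0, "availability": 9.0}
    print(overall_score(scores, weights))  # 7.5 with these assumed values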