Technical annex of indoor localization and tracking – EvAAL Competition

Important: this version of the annex will be refined with the feedback of the competitors. Refined versions will be distributed to the competitors in a timely manner by means of the competition mailing list.

Technologies

Each team should implement a localization system to cover the area of the Living Lab, with no limit on the number of devices. Localization systems can be based on different technologies, including (but not limited to): measurements on radio communications (for example RSS, angle of arrival, etc.) based on standard radios (such as IEEE 802.11, IEEE 802.15.4 or IEEE 802.15.1), RFID or ultra-wideband; infrared sensors; active infrared break beams; ultrasound; camera and optical systems; etc. The proposed systems may also include combinations of different technologies. Other technologies may be accepted provided they are compatible with the constraints of the hosting living lab. To this purpose, competitors wishing to check such compatibility may inquire with the organizers by e-mail.

Moreover, this year's competition includes the possibility of exploiting the context information provided by the Living Lab (such as opening/closing doors, switching lights on/off, etc.) in order to refine the proposed localization system (details are given in the refinement of this document).

The teams should consider possible restrictions related to the availability of power plugs, cable placement, attachment of devices to walls/furniture in the Living Lab, etc. The requirements of the proposed localization systems should be communicated at an early stage in order to make the necessary on-site arrangements. However, the Technical Program Committee (TPC) may exclude localization systems if their deployment is incompatible with the living lab constraints.

Competitors are requested to integrate their solution with our logging and benchmarking system. The integration will be guided in detail and a competitor's integration package will be delivered for this purpose. The actual details of this integration are given in the refinement of this document.

Benchmark Testing

The score for measurable criteria for each competing artefact will be evaluated by means of benchmark tests prepared by the organizing committee. For this purpose each team will be allocated a precise time slot at the living lab, during which the benchmark tests will be carried out. The benchmark consists of a set of tests, each of which will contribute to the assessment of the scores for the artefact. The Evaluation Committee (EC) will ensure that the benchmark tests are applied correctly to each artefact. The evaluation process will also assign scores to the artefact for the criteria that cannot be assessed directly through benchmark testing. When both benchmark testing and EC evaluation have been completed, the overall score for each artefact will be calculated using the weightings listed in the evaluation criteria below. All final scores will be disclosed at the end of the competition, and the artefacts will be ranked according to this final score.

The time slot for benchmark testing is divided into three parts. In the first part, the competing team will deploy and configure their artefact in the living lab. This part should last no more than 60 minutes. In the second part, the benchmark will be applied.
During this phase the competitors will have the opportunity to perform only short reconfigurations of their systems. In the last part the teams will remove the artefact from the living lab in order to enable the installation of the next competing artefact. Competing teams which fail to meet the deadlines in parts 1 and 3 will be given the minimum score for each criterion related to the benchmark test. Furthermore, artefacts should be kept active and working during the whole second part. If benchmark testing in the second part is not completed, the team will be awarded the minimum score for all the missing tests.

During the second part, the localization systems will be evaluated in three phases, under the following conditions:
1. There is no requirement of localising the disturber.
2. Only the actor can carry equipment provided by the competitor.
3. The actor will start moving at least 5 seconds before the disturber.
4. Both the actor and the disturber may generate contextual events: if any, the first such event will be generated by the actor.
5. When the disturber generates an event, if any, the actor will be at least 2 metres away.

Evaluation criteria:

In order to evaluate the competing localization systems, the TPC will apply the evaluation criteria listed in this document. For each criterion, a numerical score will be awarded. Where possible the score will be measured by direct observation or technical measurement. Where this is not possible, the score will be determined by the Evaluation Committee (EC). The EC will be composed of volunteer members of the Technical Program Committee (TPC), and will be present during the competition at the Living Lab. The overall score will be estimated by summing the weighted scores of the evaluation criteria, which are:

1. Accuracy [weight 0.25] – each produced localization sample is compared with the reference position and the error distance is computed. During the first phase of the competition the user will stop (after a predefined walk, equal for all competitors) for 30 seconds in each Area of Interest (AoI). Accuracy in this case will be measured as the fraction T of time in which the localization system provides the correct information about the AoI in which the user is located.
The score is given by:

Accuracy score = 10 * T

For the last 2 phases, the stream produced by competing systems will be compared against a logfile of the expected position of the user. Specifically, we will evaluate the individual error of each measure (the Euclidean distance between the measured and the expected points), and we will estimate the 75th percentile P of the errors. In order to produce the score, P will be scaled into the range [0, 10] according to the following formula:

Accuracy score = 10 if P <= 0.5 m
Accuracy score = 4 * (0.5 - P) + 10 if 0.5 m < P < 2 m
Accuracy score = 2 * (4 - P) if 2 m < P < 4 m
Accuracy score = 0 if P > 4 m

The final score on accuracy will be the average of the scores obtained in all the phases.

2. Installation complexity [weight 0.15] – a measure of the effort required to install the AAL localization system in a flat, measured by the evaluation committee as a function of the person-minutes of work needed to complete the installation (the person-minutes of the first installer will be fully included; the person-minutes of any other installer will be divided by 2). The time T is measured in minutes from the time when the competitors enter the living lab to the time when they declare the installation complete (no further operations/configurations of the system will be admitted after that time), and it will be multiplied by the number of people N working on the installation. The parameter T*N will be translated into a score (ranging from 0 to 10) according to the following formula:

Installation complexity score = 10 if T*N <= 10
Installation complexity score = 10 * (60 - T*N) / 50 if 10 < T*N <= 60
Installation complexity score = 0 if T*N > 60

3. User acceptance [weight 0.25] – expresses how invasive the localization system is in the user's daily life and thereby the impact perceived by the user; this parameter will be evaluated by the evaluation committee following predefined criteria published in the refinement of this annex.

4. Availability [weight 0.2] – the fraction of time the localization system was active and responsive. It is measured as the ratio A between the number of produced localization data and the number of expected data: competing systems are expected to provide one sample every half a second. Excess samples are discarded. The value of availability A will be translated into a score (ranging from 0 to 10) according to the following formula:

Availability score = 10 * A

5. Interoperability with AAL systems [weight 0.15] – this parameter evaluates the degree of interoperability of the solution in terms of openness of the software, adoption of standards for both software and hardware, and replaceability of parts of the solution with others. The actual implementation of the metric is described in the refinement of this annex.
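For reference, the score mappings above can be summarised in code. The following is a minimal sketch (in Python) of the formulas as stated in this annex; it is not part of the official scoring software, and all function and variable names are our own.

```python
def accuracy_score(p75: float) -> float:
    """Map the 75th percentile localization error P (metres) to [0, 10]."""
    if p75 <= 0.5:
        return 10.0
    if p75 < 2.0:
        return 4.0 * (0.5 - p75) + 10.0   # 10 at 0.5 m, 4 at 2 m
    if p75 < 4.0:
        return 2.0 * (4.0 - p75)          # 4 at 2 m, 0 at 4 m
    return 0.0

def installation_complexity_score(minutes: float, people: int) -> float:
    """Map T*N (installation time in minutes times number of installers) to [0, 10]."""
    tn = minutes * people
    if tn <= 10:
        return 10.0
    if tn <= 60:
        return 10.0 * (60 - tn) / 50.0
    return 0.0

def availability_score(produced: int, expected: int) -> float:
    """One sample is expected every half second; excess samples are discarded."""
    return 10.0 * min(produced, expected) / expected

def overall_score(accuracy, installation, acceptance, availability, interoperability):
    """Weighted sum of the five criteria, using the weights given in this annex."""
    return (0.25 * accuracy + 0.15 * installation + 0.25 * acceptance
            + 0.2 * availability + 0.15 * interoperability)
```

For example, a 75th percentile error of 1 m gives an accuracy score of 4 * (0.5 - 1) + 10 = 8, and a single installer taking 35 minutes gives an installation complexity score of 10 * (60 - 35) / 50 = 5.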
Setting

The Smart House Living Lab is located at the Escuela Técnica Superior de Ingenieros de Telecomunicación of the Universidad Politecnica de Madrid (ETSIT UPM), in Madrid, Spain (www.etsit.upm.es). The complete address is:

Escuela Técnica Superior de Ingenieros de Telecomunicación (ETSIT)
GPS: 40.45236386045641, -3.7273263931274414

Smart House Living Lab location map

The site is easily reachable with Madrid's underground system; the nearest metro station is Ciudad Universitaria. For logistic information please download this file. Do you need to buy a last-minute capacitor or a cable? We have created a list of electronic components shops reachable from our lab.

Smart House Living Lab plant view with reference coordinates
The localizable area is the grey area in the figure above.
Infrastructure

Rooms:
Only the bathroom, the common space and the porch will be localizable. The actual localizable space and its reference coordinates can be downloaded here.

Ceiling, floor and walls
The ceiling does not have a fixed height. The lowest part, where the porch and the entrance are, is 2.32 meters high. From the porch to the first girder the ceiling is inclined until it reaches a height of 2.62 meters.
From that point on, the ceiling is 2.62 meters high.
A cross-section diagram of the structure is available here.
The best option for fixing appliances to the ceiling and walls is Blu-Tack.
We have also successfully used adhesive Velcro on other occasions.
If necessary, the ceiling tiles can be lifted. Tiles are 57 cm x 57 cm each.
Communication
Competitors will have to send their localization samples through the WiFi or the Ethernet connection.
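The exact endpoint, transport and message format are defined in the competitor's integration package, not in this annex. Purely as an illustration, the sketch below shows the idea of pushing one (x, y, timestamp) sample every half second over the network; the host, port and JSON layout are made-up assumptions.

```python
import json
import socket
import time

# Hypothetical endpoint: the real host, port and message format are
# specified in the competitor's integration package.
BENCHMARK_HOST = "192.168.1.10"
BENCHMARK_PORT = 9000

def send_samples(get_position):
    """Send one localization sample every half second (the expected rate)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        x, y = get_position()                      # competitor's own position estimator
        sample = {"x": x, "y": y, "timestamp": time.time()}
        sock.sendto(json.dumps(sample).encode(), (BENCHMARK_HOST, BENCHMARK_PORT))
        time.sleep(0.5)                            # one sample per half second
```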
Sensors and actuators connected through KNX

Competitors will receive contextual events coming from the light switches and from a stationary bike (which is NOT connected through KNX). The coordinates of these appliances are published here.
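As a conceptual illustration only (the actual event delivery mechanism and format are given in the refinement of this annex and in the integration package), a competitor could use such an event to bias its position estimate towards the location of the appliance that generated it. Everything below, including the event fields and switch identifiers, is hypothetical.

```python
# Hypothetical contextual-event handler: when a light switch is toggled,
# whoever toggled it must be within reach of that switch, so the current
# estimate can be re-weighted towards the switch coordinates.
SWITCH_POSITIONS = {             # made-up identifiers; real coordinates are published separately
    "light_bathroom": (2.1, 4.3),
    "light_common_space": (5.6, 1.8),
}

def refine_estimate(estimate, event, weight=0.7):
    """Blend the current (x, y) estimate with the position of the triggered switch."""
    if event.get("type") != "light_switch" or event.get("source") not in SWITCH_POSITIONS:
        return estimate
    sx, sy = SWITCH_POSITIONS[event["source"]]
    ex, ey = estimate
    return (weight * sx + (1 - weight) * ex,
            weight * sy + (1 - weight) * ey)
```

Note that events may also be generated by the disturber (see the conditions listed above), so a real system would have to decide whether an event was actually caused by the tracked actor before using it.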
Other equipment

Computing
Software:
Household appliances
Photo of the kitchen of the Living Lab