UnoViS: The MedIT Public Unobtrusive Vital Sign Database
Welcome to UnoViS!
While PhysioNet is a large database of standard clinical vital sign measurements, no comparable database exists for unobtrusively measured signals. This hinders progress in the vital area of signal processing for unobtrusive medical monitoring, as not everybody owns the specific measurement systems needed to acquire such signals. Furthermore, without a common database, different signal processing approaches cannot be compared. Our UnoViS database closes this gap. It contains measurements from various scenarios, ranging from a clinical study to recordings obtained while driving a car. Currently, 145 records with a total of 16.2 hours of measurement data are available, provided as MATLAB files or in the PhysioNet WFDB file format. In its initial state, the database includes only capacitive ECG and unobtrusive PPG signals, together with a reference ECG. The dataset from the clinical study contains clinical annotations. Additionally, supplementary functions are provided, which simplify the use of our database and thus the development and evaluation of new algorithms. With our database, it is now possible to develop urgently needed methods for very robust parameter extraction or robust signal fusion in view of the frequent, severe motion artifacts encountered in unobtrusive monitoring.
An example of such severe motion artifacts, and of the opportunities of multichannel unobtrusive monitoring, is shown in the following figure. While channel cecg1 shows R-peaks of very low amplitude even after recovering from a severe artifact, channel cecg2 shows R-peaks of larger amplitude. Hence, it would be beneficial to use this channel for parameter extraction. Furthermore, the optical channels opt1 and opt3 recover even faster and would thus increase the time in which a reliable heart rate can be estimated. This example clearly shows the opportunities of multichannel unobtrusive monitoring. Our database shall stimulate research on very robust signal processing algorithms such as peak detection, heart rate estimation and sensor fusion in view of severe and frequent motion artifacts, as these methods are a key aspect for the success of unobtrusive, ubiquitous medical monitoring.
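The channel-selection idea described above can be sketched in a few lines of Python. This is a minimal illustration, not part of UnoViS or its supplementary functions: the quality index (peak amplitude over a robust background level) is an assumed heuristic, and all signals are synthetic.

```python
import numpy as np

def channel_quality(sig):
    """Crude per-channel quality index: peak amplitude relative to a robust
    background level. Illustrative heuristic only, not the UnoViS method."""
    sig = sig - np.mean(sig)
    peak = np.max(np.abs(sig))                 # dominated by R-peaks, if present
    background = np.median(np.abs(sig)) + 1e-12
    return peak / background

def select_best_channel(channels):
    """Return the index of the channel with the highest quality index."""
    scores = [channel_quality(c) for c in channels]
    return int(np.argmax(scores)), scores

# Synthetic demo: channel 0 is mostly noise (like cecg1 after an artifact),
# channel 1 has clear R-peak-like spikes (like cecg2).
fs = 250
rng = np.random.default_rng(0)
noisy = 0.5 * rng.standard_normal(10 * fs)
clean = 0.05 * rng.standard_normal(10 * fs)
clean[::fs] += 5.0                             # one large spike per second

best, scores = select_best_channel([noisy, clean])
print(best)  # index of the cleaner channel
```

A real system would of course use a more refined signal quality index and re-evaluate it continuously, since the best channel changes over time.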
The database is completely free and can be used for any purpose. However, we kindly ask you to cite the following publication if the database and/or its supplementary functions are used:
T. Wartzek, M. Czaplik, C. Hoog Antink, B. Eilebrecht, R. Walocha and S. Leonhardt: "UnoViS: The MedIT Public Unobtrusive Vital Sign Database", Health Information Science and Systems. 2015;3:2. http://www.hissjournal.com/content/3/1/2
The data is available as a MATLAB mat-file or as several files per record in accordance with the PhysioNet WFDB (WaveForm DataBase) file format.
The origin of the data is manifold and currently covers three application scenarios: measurements from a clinical study, measurements while the subject is driving a car, and measurements while the subject is lying in bed. Additionally, two records show the maximum measurement quality of our latest system under optimal conditions. Since the measurements were acquired over a period of several years, slightly different (improved) measurement systems were used. In all scenarios, the monitored subjects wore their normal clothes.
The database consists of records, each representing one measurement: for example, one measurement of one patient in the clinical study, or one driver driving in a specified scenario in the driving tests. Note that several records may exist for the same subject in the driving and in-bed scenarios. These records are labeled with the same unique subject ID but different record IDs.
The structure of each record in the MATLAB file is given in Table I. Each record is saved as a struct and consists of several fields, such as a unique id, the duration of the measurement, the measurement scenario measScenario, information about the subject, and several channels containing the actual measurement data and annotations ann.
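As a rough illustration of this layout, the following Python sketch mimics the nested record structure with plain dictionaries. Only the field names follow the description above; all values are invented placeholders, and loading the actual *.mat files (e.g. via scipy.io.loadmat) yields an analogous, though not byte-identical, structure.

```python
# Mock record mirroring the field names described above (Table I).
# All values are invented placeholders, not actual UnoViS data.
record = {
    "id": 1,
    "duration": 300.0,            # measurement duration (seconds, assumed unit)
    "measScenario": "clinical",   # e.g. clinical study, driving, lying in bed
    "subject": {"id": 17},        # unique subject ID (several records may share it)
    "channels": [
        {
            "name": "cecg1",                  # capacitive ECG channel
            "data": [0.0, 0.1, -0.2, 0.05],   # placeholder samples
            "ann": [                          # annotations, see Table II
                {"class": "peaks", "subclass": "qrs",
                 "source": "osea", "loc": [12, 212, 410]},
            ],
        },
    ],
}

ch = record["channels"][0]
peak_ann = next(a for a in ch["ann"] if a["class"] == "peaks")
print(ch["name"], peak_ann["loc"])
```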
The field ann is an array of structs within each channel. It contains the class (e.g. peaks or rhythm), the subclass, which may be event-based (e.g. for the class peaks) or interval-based (e.g. in the clinical study the clinicians analyzed intervals of 5 s), the source (e.g. manual annotation by medical experts or automatic annotation by the open-source ECG detector OSEA), and the location loc of the annotation in samples. The type of the annotation(s) depends on the annotation class and subclass and is further elucidated in Table II, which also lists all currently available annotations. All datasets contain peaks detected by OSEA or by a medical expert. The dataset from the clinical study additionally contains the following annotations for the cECG as well as for the reference ECG: the diagnosed rhythm, and whether extrasystoles (extrasys) or a bundle branch block (bbb) are present. If the two clinicians could not clearly identify a parameter, or if their results differed, this is denoted with NaN or '-2'. Furthermore, the heart rate and various time durations such as the PQ, QRS and QT times are given; for each, the mean of the two clinicians' results and their relative difference are stated. Again, if a parameter, e.g. the PQ time, could not be clearly estimated, it is denoted with NaN.
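Since peak annotations store sample indices in loc, a heart-rate series can be derived directly from successive annotation locations. A minimal sketch, with an assumed sampling rate and made-up peak locations:

```python
fs = 250                          # sampling rate in Hz (assumed for this demo)
loc = [0, 250, 500, 760, 1010]    # R-peak sample indices, made up

# RR intervals in seconds, then instantaneous heart rate in beats per minute.
rr_s = [(b - a) / fs for a, b in zip(loc, loc[1:])]
hr_bpm = [60.0 / rr for rr in rr_s]
print([round(h, 1) for h in hr_bpm])  # → [60.0, 60.0, 57.7, 60.0]
```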
The supplementary MATLAB functions can be found here:
| MATLAB Functions / Examples | Download |
| --- | --- |
| for *.mat files | |
| for wfdb files | |
A Synthesizer Framework for Multimodal Cardiorespiratory Signals
Our group has published a synthesizer for the generation of multimodal cardiorespiratory signals. Below, a MATLAB demonstration can be downloaded for free. This code was used to generate Figure 14 of the original publication. Like the UnoViS database, this code is completely free, but we kindly ask you to cite the original publication:
C. Hoog Antink, S. Leonhardt, and M. Walter: “A Synthesizer Framework for Multimodal Cardiorespiratory Signals”. Biomedical Engineering and Physics Express, June 2017
Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring
A multisensor setup for unobtrusive vital sign estimation was published by our group. Here you can download the data of the “Motion Sequence” (UnoViS_motion2017) and the “Video Sequence” (UnoViS_video2017). The data format is the same as that of the UnoViS database. We currently do not provide this data in the WFDB format, but we can convert it for you if necessary. As with the original database, the data is completely free, but we kindly ask you to cite the original publication:
C. Hoog Antink, F. Schulz, S. Leonhardt, and M. Walter: “Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring”. Sensors, December 2017
| Dataset | Signal types | # of Records | Total Length | mat-file |
| --- | --- | --- | --- | --- |
| UnoViS_motion2017 | See publication | 9 | 81 min | |
| UnoViS_video2017 | See publication | 7 | 7.1 h | |
Signal-Level Fusion with Convolutional Neural Networks for Capacitively Coupled ECG in the Car
In the publication cited below, we presented an algorithm for beat detection in multichannel capacitively coupled ECG. The algorithm is written in Python and makes use of the “Keras” and “TensorFlow” packages (among others). To use this demo, you also need to download the “UnoViS_auto2012” MAT-file found above. Like the UnoViS database, this code is completely free, but we kindly ask you to cite the original publication:
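The core idea of signal-level fusion is to let one detector see all channels at once, rather than detecting beats per channel and merging the results afterwards. The following NumPy sketch only illustrates the input arrangement and uses a trivial stand-in for the detector; the actual CNN architecture, window sizes and thresholds of the publication are not reproduced here.

```python
import numpy as np

fs = 250
n_channels, n_samples = 3, 4 * fs
rng = np.random.default_rng(1)

# Synthetic multichannel cECG: shared beats, channel-dependent amplitude.
x = 0.1 * rng.standard_normal((n_channels, n_samples))
x[:, ::fs] += [[1.0], [0.5], [0.8]]            # one beat per second

# Signal-level fusion input: one (samples, channels) array per window,
# i.e. a 1-D sequence with n_channels feature maps for a CNN.
cnn_input = x.T
print(cnn_input.shape)                         # (1000, 3)

# Trivial stand-in for the learned detector: average channels, then threshold.
fused = x.mean(axis=0)
beats = np.flatnonzero(fused > 0.5)
print(len(beats))                              # 4 beats found
```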
C. Hoog Antink, E. Breuer, D. U. Uguz, and S. Leonhardt: “Signal-Level Fusion with Convolutional Neural Networks for Capacitively Coupled ECG in the Car”. Computing in Cardiology 2018;45: accepted for publication
If you have any questions, please contact us at: firstname.lastname@example.org
 A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, “PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals,” Circulation, vol. 101, no. 23, pp. e215–e220, 2000.
 M. Czaplik, B. Eilebrecht, R. Walocha, M. Walter, P. Schauerte, S. Leonhardt, and R. Rossaint, “The reliability and accuracy of a noncontact electrocardiograph system for screening purposes,” Anesthesia and Analgesia, vol. 114, no. 2, pp. 322–327, 2012.
 B. Eilebrecht, T. Wartzek, J. Lem, R. Vogt, and S. Leonhardt, “Capacitive electrocardiogram measurement system in the driver seat,” Automobiltechnische Zeitschrift (ATZ), vol. 113, no. 3, pp. 232–237, 2011.
 T. Wartzek, B. Eilebrecht, J. Lem, H.-J. Lindner, S. Leonhardt, and M. Walter, “ECG on the Road: Robust and Unobtrusive Estimation of Heart Rate,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 11, pp. 3112–3120, 2011.
 T. Wartzek, C. Brüser, T. Schlebusch, C. Brendle, S. Santos, A. Kerekes, K. Gerlach-Hahn, S. Weyer, K. Lunze, C. Hoog-Antink, and S. Leonhardt, “Modeling of Motion Artifacts in Contactless Heart Rate Measurements,” in Computing in Cardiology (CinC 2013), 22 - 25 Sep 2013.
 P. S. Hamilton, “Open Source ECG Analysis Software Documentation,” 2002.
 C. Hoog Antink, S. Leonhardt, and M. Walter: “A Synthesizer Framework for Multimodal Cardiorespiratory Signals”. Biomedical Engineering and Physics Express, June 2017
 C. Hoog Antink, F. Schulz, S. Leonhardt, and M. Walter: “Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring”. Sensors, December 2017
 C. Hoog Antink, E. Breuer, D. U. Uguz, and S. Leonhardt: “Signal-Level Fusion with Convolutional Neural Networks for Capacitively Coupled ECG in the Car”. Computing in Cardiology 2018;45: accepted for publication