
Introduction

Erik Tielemans, manager Research and Innovation Environmental Quality at RIVM (Rijksinstituut voor Volksgezondheid en Milieu), opened the meeting organized by RIVM preceding this week’s FAIRMODE and CEN meetings in Utrecht, The Netherlands. The meeting had an informal character and aimed at exploring EU-wide activities on air quality sensors and their state of the art. Air quality sensors are likely to have a huge impact on monitoring strategies. With around 60 participants from 15 EU countries, coming from EPAs, knowledge institutes and commercial institutes, it is clear that there is much interest in the subject.

Some highlights:

  • Citizens will use sensors more and more. Professionals should be prepared for this. Clear communication is key.
  • There is an urgent need for sharing information on available sensor tests, calibration practices and algorithms.
  • Sensor calibration poses impressive challenges.
  • Everyone agreed: we should promote and stimulate open (calibration) systems.
  • Sensors can benefit from embedding in AQ model systems, though this requires considerable (model) work.
  • Is there a niche for sensors in combination with models?
  • A European community is desirable. The concept of FAIRMODE can serve as an example.
  • We should discuss the possibility of a sensor community with the FAIRMODE, AQUILA and CEN communities.

All presentations are available in PDF at the bottom of this page. 

Presentations

Marita Voogt (RIVM, The Netherlands) presented the roadmap for the innovation of air quality monitoring at RIVM. Until 2020, RIVM undertakes research in order to establish a monitoring program to which various datasets can contribute. These will be a combination of reference monitors, sensors (including data from urban networks and individual citizens), satellite data and models. In cooperation with the DCMR and GGD (Gemeentelijke Gezondheidsdienst) Amsterdam networks, RIVM is starting sensor performance tests in the field at six reference locations. The AirSensEUR is one of the sensors to be tested. For citizen science, RIVM has developed a knowledge portal, is preparing a data portal and takes part in pilot studies with citizens. Some of the other Dutch institutes active in the field of air quality sensors and citizen science introduced themselves: ECN, KNMI (Koninklijk Nederlands Meteorologisch Instituut)/GGD Amsterdam (Urban AirQ project) and TNO.

Brian Stacey (Ricardo, UK) presented the need for a good characterization of sensors, in order to be able to use the data for compliance assessment, trend analysis and input to models. Within a UNEP project aiming at distributing sensors in African countries, a wish list for quality assurance and quality control was developed. Characterization might take place in three ways: 1) regularly against reference stations, 2) as in 1, but with only one sensor, applying its characterization to the others, or 3) only comparing the sensors against each other. Issues raised by Brian include the intellectual property of calibration algorithms, insight into long-term performance, and whether the assessment of sensors will be able to keep up with the speed of their development.

Michel Gerboles (JRC, EU) presented the results of field and laboratory tests of gas sensors during the last three years at JRC. For electrochemical sensors, interference is the main problem; for metal oxide sensors it is stability. In lab tests, correction algorithms via linear regression might do the job, but in the field this is no longer the case. Using artificial neural network techniques is a better option. The field circumstances at Ispra are high ozone and low NO2.
The present set of sensors for NO2, NO, ozone and CO in the AirSensEUR has been tested recently. The ozone filter applied by Alphasense to its NO2 B4 sensor seems to work well. There are, however, effects of temperature and humidity. The work is currently being prepared for publication.
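As a minimal sketch of the lab-calibration step mentioned above (all numbers are hypothetical), a linear correction can be fitted by ordinary least squares. It is exactly this kind of simple fit that tends to break down in the field, where temperature, humidity and ozone interfere:

```python
# Minimal sketch: univariate linear calibration y = a*x + b fitted by
# ordinary least squares. Sensor readings and reference values are made up.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

raw = [0.10, 0.25, 0.40, 0.55, 0.70]   # raw sensor signal, hypothetical
ref = [5.0, 12.5, 20.0, 27.5, 35.0]    # reference NO2 (ug/m3), hypothetical
a, b = fit_linear(raw, ref)            # here the data are exactly linear
calibrated = [a * x + b for x in raw]
```

In field conditions, extra covariates (temperature, humidity) or non-linear models such as neural networks are needed instead of a single slope and intercept.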

Petra Bauerova (Czech Hydrometeorological Institute - Tusimice Observatory, Czech Republic) presented their ambition to use sensors on drones to measure vertical and horizontal transport of air pollution. Drones can carry only a limited load, and the load determines the flight time. She did some experiments with several sensors, among others an IR camera, Cairpol gas sensors, and Grimm and Palas Fidas PM sensors. The Cairclip O3/NO2 sum sensor showed a strong correlation with the reference method; the largest deviations were found at low temperature and high humidity. The dewpoint deficit might be a good variable to use for validation.

MariCruz Minguillon (IDAEA-CSIC, Spain) presented results of field tests of several gas and PM sensors at an urban background site in Barcelona. Some sensor nodes have reasonable performance. Optical particle counters work better for particles <2.5 µm than for larger particles. An important finding is that sensor performance varies over time, so frequent or even continuous validation with reference instrumentation is needed. Some improvement in sensor performance is required before sensor data can be used in air quality assessment. She mentioned the paper by Borrego et al. (2016) about the EuNetAir COST action comparison exercise at Aveiro, Portugal.

George Biskos (The Cyprus Institute, Cyprus & Delft University of Technology, The Netherlands) presented some fundamental research on the development of sensors for gases and ultrafine particles. For gases, nanomaterials are being applied in order to gain higher sensitivity. For particles, the most feasible route to cheap sensors is via optical particle counters. George posed the question whether it is time to move away from mass concentration and aim for number concentration as sensor output. In that case, one also wants to measure smaller particles than is possible with optical particle counters (>0.3 µm). However, measuring submicron particles is very costly. Why not make a DMA (differential mobility analyser) out of plastic materials? The size distribution tests with their own 3D-printed DMA are very promising. Another promising direction is working with electric fields within tubes (10 nm particles find it hard to get through).

Irena Jezek (Aerosol d.o.o., Slovenia) presented an aethalometer capable of discriminating between fossil fuel (e.g. diesel vehicles) and biomass (e.g. wood burning) sources of black carbon. It was used in a study in Ljubljana, where both sources are abundant, at least in the winter season. Wood burning is a major concern of citizens who suffer from odour and particulate exposure. Being able to measure wood smoke with cheap sensors would be very welcome to them. However, there is no simple solution for that.

Christof Asbach (IUTA, Germany) presented work on the low-cost Sharp dust sensor. They coupled it to an Arduino Uno and tested it under laboratory conditions, for other purposes than ambient air (testing the stability of test aerosols and the aging of purifying filters). The Sharp sensor is able to give a reasonably good estimation of mass concentration, if mean particle size and refractive index are known. There was a strong dependence of calibration factors on size. The sensor, together with other types of sensors, will soon be used in a field test at the LANUV urban background station Mülheim-Styrum. In addition, the sensors will be tested for use in workplace exposure monitoring.

Joost Wesseling (RIVM, The Netherlands) presented the development of a statistical framework for monitoring optimization, using a combination of modeling, reference monitors and lower-cost sensors or diffusion tubes, each with their own uncertainty. It can be used to assess the number of monitoring locations needed to reach a certain level of model calibration uncertainty. Conversely, improved model calibration is even possible with fewer reference monitors and more low-cost sensors. Of course, this depends on the uncertainty of the sensors: we need to know their bias, drift and random uncertainty. Some experimental work with the Alphasense NO2 B4F3 sensor showed the need for individual calibration. Much like Brian Stacey, Joost presented three ways to do that: 1) periodic checks against reference stations, 2) in combination with Palmes tubes and 3) on-the-fly calibration using detailed hourly air quality maps.
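One simple way to picture how low-cost sensors can complement a reference monitor, loosely in the spirit of the framework above but not RIVM’s actual method, is inverse-variance weighting of independent, bias-free measurements (all numbers hypothetical):

```python
# Sketch: combine measurements with different standard uncertainties by
# inverse-variance weighting. Assumes independent, bias-free errors,
# which is why bias and drift must be characterized first.
def combine(estimates):
    # estimates: list of (value, standard_uncertainty) pairs
    weights = [1.0 / (u * u) for _, u in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    uncertainty = (1.0 / sum(weights)) ** 0.5
    return value, uncertainty

# One reference monitor (uncertainty 2.0) plus four low-cost sensors
# (uncertainty 8.0 each), all observing the same concentration (made up):
obs = [(30.0, 2.0), (32.0, 8.0), (28.0, 8.0), (31.0, 8.0), (29.0, 8.0)]
value, u = combine(obs)
```

Even four noisy sensors pull the combined uncertainty below that of the reference monitor alone; in practice the sensors’ bias, drift and random uncertainty have to be known, as noted above.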

Francisco J. Gomez-Moreno (CIEMAT, Spain) presented sensor testing in the framework of the TECNAIRE project. The sensors were from Libelium, Smart Citizen and AeroQual. In the laboratory, gas cylinders and a dilution system were used. All NO2 and CO sensors suffered from ozone interference.
In the field, at the urban background station in Madrid, the sensor data were not directly available, only through the databases of the sensor providers. That probably explains the observed time lags. At least the temperature and humidity sensors worked well, but the NO2 and CO sensors were not that good. Besides, questions remain regarding stability and quantification limits.

Bino Maiheu (VITO, Belgium) presented a citizen science project in which 2000 (!) citizens measured the concentration of NO2 with a diffusive sampler during one month in Antwerp (http://www.curiezeneuzen.eu/). They learned that for citizen science air quality data to be usable, calibration and metadata are two crucial issues. For example, different housings around the samplers turned out to have different calibration lines.
They used the dataset for validation of the IFDM-OSPM model chain for dilution of traffic emissions, which turned out to be difficult. In fact, using big datasets with high spatial and/or high temporal resolution places requirements on the models as well. More detailed input data are needed, e.g. real-time traffic, urban driving conditions and better street canyon representation.

Philippe Schneider (NILU, Norway) presented results of testing the AQMesh platform v3.5 within the framework of the CITI-SENSE project. Castell et al. (2017) recently published a paper, and more papers are to follow. Quite good results were obtained in the laboratory, but in the field the results were not that good. For NO they were not bad; the expanded uncertainty of some sensors even met the data quality objective criteria for indicative measurements. However, for the other gases (NO2, O3, CO) performance was really poor. The optical particle counters were very sensitive to relative humidity and fog.
Repeatability (between individual sensors) also seemed to be a challenge; for example, the sensitivity to temperature varied between them. Furthermore, calibration results depended on the type of site. This leads to the conclusion that sensors always need good in situ calibration. For that, more sophisticated techniques than linear regression are needed (e.g. machine learning).

Jan Vonk (RIVM, The Netherlands) & Michel Gerboles (JRC, EU) gave a duo presentation on the AirSensEURs. RIVM is one of the EPAs actively involved in testing the AirSensEUR sensor platform developed by JRC. RIVM presently has 12 AirSensEUR platforms. They were first tested during the CINDI campaign at the regional supersite Cabauw; results are to follow. The platforms are now distributed over six reference stations in the Netherlands for a year-round co-location of measurements.
Michel explained that the AirSensEUR was developed as an open, INSPIRE-compatible system. The idea was to have an open system in which people could plug in sensors themselves, but up to now most users want a fully working system.
In 2016, a training session was organized at Ispra, choices were made for the gas sensors to be included, and afterwards a calibration exercise was carried out in the laboratory (to be published soon). More sophisticated mathematics than linear regression might be needed for the calibration analysis.
JRC works on the automation of data handling (filtering, validation). There is a smartphone application called SenseEURAir. Michel expects a new training session at the end of spring. The outlook for the future further includes the integration of other sensors (e.g. the Alphasense OPC) and active sampling, new deployments in member states and pilot tests in other domains.
An important notice is that the AirSensEUR is not a commercial product, so troubleshooting is done together with the users.

Marek Rosicki (Atmoterm, Poland) presented first tests with low-cost PM sensors in cities in Poland. Although their core business is model-based air quality management systems, they see the need for adding low-cost sensor networks: Poland suffers from high PM levels, and there is a need for additional urban air quality data (spatial and temporal).
They started with Dylos, but shifted to a six-channel Chinese optical particle counter that does not contain any drying or heating. They looked at normalization and correction for temperature and humidity, and calibrated the sensors against state monitoring data. A sensor network was deployed in two cities/regions in October and January. Concentration levels varied between these periods; in January they were very high. During both campaigns, sensor performance was quite good, although the sensors sometimes over- or underestimated peak values.

Philippe Schneider (NILU, Norway) gave an additional presentation on the use of sensor data in mapping urban air quality. It was rather an outlook to the future: what is possible if we are able to correct sensors properly, filter out outliers, get rid of drift, etc.? It is possible to combine sensor data with dispersion or land use regression models. Data assimilation techniques are seen as overkill at this stage; instead they used ‘data fusion’: a statistical approach to combine hourly sensor data with a yearly averaged concentration basemap. This results in hourly maps for NO2 and PM10/PM2.5, with local adjustments to the static map. Maps were made within the CITI-SENSE framework for Oslo and Barcelona, based on AQMesh sensor data. The fused maps were compared with data from reference AQ monitoring stations. Averaged over all stations, the results were quite good. They also simulated the network size needed to reach a saturation level, beyond which more locations would not yield better maps. For a simple case in Oslo (not transferable!), the number was around 50.
It is all about using the "swarm knowledge" of the entire network!
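To make the data-fusion idea concrete, here is an illustrative sketch, not the actual CITI-SENSE method: a static basemap value is locally adjusted with distance-weighted hourly sensor offsets. The inverse-distance weighting scheme, coordinates and values are all assumptions for illustration:

```python
# Sketch of 'data fusion': adjust a static annual-mean basemap value with
# hourly sensor offsets, spread spatially by inverse-distance weighting.
def fused_value(x, y, basemap_value, sensors, power=2.0):
    # sensors: list of (sx, sy, offset), offset = sensor minus basemap there
    num = den = 0.0
    for sx, sy, offset in sensors:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return basemap_value + offset  # exactly at a sensor location
        w = 1.0 / d2 ** (power / 2.0)
        num += w * offset
        den += w
    return basemap_value + num / den

# Two sensors this hour: one reads 4.0 above the basemap, one 2.0 below.
sensors = [(0.0, 0.0, +4.0), (10.0, 0.0, -2.0)]
value = fused_value(5.0, 0.0, 25.0, sensors)  # halfway between the sensors
```

Halfway between the two sensors their offsets carry equal weight, so the basemap value of 25.0 is nudged up by their mean offset of +1.0.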

Hannamari Jaakkola (Vaisala, Finland) explained that Vaisala has decided to enter the market of low-cost air quality sensors, next to their core business of meteorological equipment. The pollution situation in China and the interaction between air quality and meteorology were drivers for that decision. They started an air quality sensor company in Finland in September 2016. Using Alphasense gas sensors, they presented the AQT410. Another version (AQT420) also contains a 16-bin laser PM sensor. Algorithms are now being developed through co-located measurements in Finland and China.


Discussion

In two breakout groups, several topics related to the potential of low-cost sensors for air quality management were discussed.

Why would we use 'low cost' sensors?

The technical capability of an individual sensor is limited because of its large uncertainty. In a network, however, the calibration uncertainty can be reduced to such an extent that the measurements have considerable certainty and enable high-resolution measurements.
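A back-of-the-envelope sketch of this argument (all numbers assumed): averaging n sensors shrinks the random component of uncertainty by 1/sqrt(n), while a shared systematic bias does not average out:

```python
# Sketch: total uncertainty of a network mean of n sensors, assuming
# independent random errors plus a shared systematic bias (made-up values).
def network_uncertainty(random_sigma, shared_bias, n):
    return (shared_bias ** 2 + random_sigma ** 2 / n) ** 0.5

u_single = network_uncertainty(8.0, 2.0, 1)    # one sensor on its own
u_network = network_uncertainty(8.0, 2.0, 25)  # network of 25 sensors
```

With 25 sensors the random part drops from 8.0 to 1.6, but the assumed shared bias of 2.0 remains as a floor; this is why calibration of the network stays essential.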

How much are we willing to pay for ‘low cost’ sensors?

The costs of sensors are not limited to the purchase of the sensor itself; the electronics/hardware should also be considered, plus the costs of validation and maintenance. In general, we would be willing to purchase ‘low cost’ sensors when they are cheaper than reference instruments. However, when a higher-resolution coverage of measurements is achieved, relatively higher costs are acceptable.

Informing each other about results of sensor tests

Normally, publication in peer-reviewed papers takes a long time, and once publications are out, the sensors themselves have newer versions. We need ways to spread the knowledge sooner. An EU-wide knowledge or data portal might be helpful for that.

Need for harmonized test guidelines

CEN WG42 works on that, but this process will take some years from now. We need something sooner. Can the AirSensEUR community take a role here, AirMonTech-like?
How can the FAIRMODE Delta tool help out here? For example, it always presents results in the same visualization (harmonized graphs and colours).
Both governmental bodies and manufacturers should participate.
The national reference labs of the member states should have the duty to perform the tests.

Wish for better sensors and open systems

Most ‘low cost’ sensors are, to a certain extent, a black box. In order to use them for air quality measurements, or even include them in official measurements, we need to understand the sensor. Testing is crucial, but we should also have access to the exact software and hardware configuration of the sensor. Algorithms should be developed in the community and made publicly available.
There is consensus that we should promote open systems, although black boxes are acceptable after a quality assurance programme. Companies invest heavily in developing algorithms, so they might not be willing to open up. However, now that this process is just starting (AirSensEUR!), it may stimulate them to become more open. Should we wait for better sensors? No, we need to start working with what is available: it will drive further improvements. Companies have to sell products to improve them. We should specify in detail what we want.

Wish for other low-cost sensors than PM and NO2

  • Ammonia
  • Black carbon (wood burning), ultrafine particles.

How about citizen measurements?

Using sensors is not foreseen in the Air Quality Directive, but at the same time there is pressure from citizens, who are starting to measure their environment more and more. They ask questions and turn to us for answers. As national institutes, we should gain and share the knowledge to facilitate citizens in their understanding of sensors and measurements. Our understanding of sensors is also important in view of these societal requests.
Dealing with citizens’ lower-quality data might call for cloud databases and machine learning techniques. This carries the risk that people do not recognize their own data afterwards.
LANUV and RIVM will allow people to put their sensors at their stations for co-located measurements.
It is RIVM’s experience that people ask for our support. It does not have to take a lot of resources: we try to give minimal support on an individual basis and exchange knowledge publicly. Some citizens even like to do the data analysis themselves.
What if there is a gap between official measurements, citizen measurements and models?
Swarm analysis gives you some control over what comes out at the end, but not over individual measurements. We have to experience it and find ways to deal with it. Good expectation management is very important.
How do we know the quality of citizen data? Within a data portal, we might ask for metadata and award the data with stars; citizens can earn more stars for better sensors. At the same time, we must not scare people into thinking they need a certain number of stars: data at every star level is welcome, we only handle them differently.

Collaboration and communication

Is there a need for an air sensor network? Indeed, there is a need for a combination of measurement and modelling.
Communication with FAIRMODE and AQUILA is very important.
What will be the focus: stationary or mobile sensors? Technically we can do both.
Can we meet once every year, coupled to another meeting? There should be something to gain in the meetings: we must focus on how to improve what we do, on developing methodologies.
We need to find resources for that.