Distributed intelligence (or blockchain intelligence, as recent trends insist on calling it) is another paradigm of “less is more.” Over the past decades, data collection and management have become a mainstream research topic.
As integrated microchips have become cheaper and more reliable, monitoring devices are booming across the entire market. It is evident that solid datasets are required to extract information, however:
- “Is there a need for computationally expensive modules where detailed and tedious data exploitation strategies are deployed?”
- “Do we really need Big Data to achieve adequate situational awareness?”
- “Is adequate situational awareness enough?”
Such questions may sound contradictory, but our aim is to discuss these issues while thinking outside the box, in a non-mainstream manner. The principle of collecting and storing huge amounts of structured, semi-structured, or even unstructured (e.g., multimedia) data may be linked with the goal of achieving a deeper and more detailed overview of conceptual dynamics. However, “more” is not always “more efficient”; in fact, there is a thin line between “more” and “enough,” which, in most modern concepts, is severely crossed without any obvious reason. “More” is usually aligned with the human perception of being able to perform better and more efficiently, and efficiency is strongly linked to better performance. This raises the next questions:
- “What does better performance mean?”
- “How do you and how should you measure performance?”
The definition of operational performance may vary from case to case; however, everybody should agree on a common – probably abstract – principal definition:
“Performance can be defined as a metric which represents the total amount of resources required to achieve a certain goal.”
An example, among several others, of the big-data management and exploitation fever can be observed in Building Energy Management applications. Huge efforts and manpower have been spent on elaborate building modeling toward extracting error-free objects/instances able to “accurately” emulate building dynamics. Of course, the level of modeling detail affects “application performance” and should be balanced against the available resources, since highly complex interplays among heterogeneous entities with constantly diversifying dynamics are involved (e.g., weather conditions, occupancy and usage habits, material and equipment aging effects, etc.).
The abstraction level of model-assisted Building Management concepts should always be balanced against the computational and manpower resources required to achieve a certain level of control and energy performance. Building ecosystems, as in all real-life cases, do age and change their behavior according to their “life-experiences” – even if such changes are not visible or explicitly observable. Based on the latter, one could reasonably point to modeling inadequacies in emulating long-term behaviors and dynamics: eventually a model recalibration – if not a redesign from scratch – will be required. As a result, the main question which arises is whether elaborate building models, which consume considerable manpower during the design and recalibration stages, are needed to achieve efficient Building Energy Management.
On the other hand, several Building Energy Management approaches attempt to simplify building dynamics by utilizing explicitly linear or piece-wise linear models (sets of linear equations stitched together to capture the evolution of the building dynamics). Some could reasonably argue that these approaches are quite efficient during the design phases; however, modeling inadequacies and simplifications usually result in poor performance during operation, where a quite complex non-linear ecosystem is treated as a linearized one for “convenience” reasons. As a result, the main question which arises is whether simplified building models, which consume close-to-zero resources during the design and recalibration stages, are enough to achieve efficient Building Energy Management.
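As a toy illustration of why such linearization can be misleading, the sketch below compares a hypothetical nonlinear heat-exchange curve with a two-segment piece-wise linear fit. All numbers are invented for illustration and are not taken from any real building model: the fit is reasonable near the operating point, but the error grows as conditions drift away from it.

```python
import math

def nonlinear_response(temp_c):
    """Hypothetical nonlinear heat-exchange curve (saturates away from 20 degrees C)."""
    return 0.5 * math.tanh(temp_c - 20.0)

def piecewise_linear_response(temp_c):
    """Two linear segments stitched together at the 20 degrees C operating point."""
    slope = 0.5 if temp_c < 20.0 else 0.25
    return slope * (temp_c - 20.0)

# Mismatch is modest near the operating point but grows away from it.
error_near = abs(nonlinear_response(20.5) - piecewise_linear_response(20.5))
error_far = abs(nonlinear_response(26.0) - piecewise_linear_response(26.0))
```

The further the real conditions drift from the point around which the model was linearized, the larger the control error becomes, which is exactly the operational-phase weakness described above.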
In addition to the above concerns, the vast majority of the discussed Building Energy Management approaches adopt a central node for data collection and exploitation. Centralized approaches may seem like a tidy and conceptually efficient way to manage and exploit data collected from every corner of a building ecosystem, but is this always true? Taking as an example the recent trends in cloud computing or in blockchain data storing, imagine a common residential building in a suburban area consisting of 10 similar apartments, where each apartment has 4 living areas (2 bedrooms, 1 kitchen-living room, and 1 bathroom), each equipped with an independent HVAC, i.e., 4 x 10 = 40 rooms and 40 HVAC control setpoints in total.
Moreover, diverse occupancy schedules and weather effects (due to different orientations and different material “life-experiences”) in each apartment call for an individually tailored strategy to preserve thermal comfort. To manage and control the 40 HVACs in an individually acceptable, energy-efficient manner, measurements (for example: 1. indoor temperature, 2. CO2 levels, 3. relative humidity, 4. energy consumption, 5. human presence) from the 40 rooms should be periodically transmitted to a central node, i.e., 40 x 5 = 200 data-points in total. Based on these data-points, 40 HVAC setpoints should be calculated and eventually applied (i.e., a closed-loop control strategy). It is evident that such a centralized approach may ease data handling and exploitation, but it may also severely increase the deployment costs/resources (e.g., wiring, data-routing, security and privacy, maintenance, computational power, storage capacity). But do we really need to go down this expensive path?
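The communication load of the centralized scheme can be tallied in a few lines, using only the figures given in the text:

```python
# Back-of-the-envelope count for the centralized example above.
APARTMENTS = 10
ROOMS_PER_APARTMENT = 4   # 2 bedrooms, 1 kitchen-living room, 1 bathroom
SENSORS_PER_ROOM = 5      # temperature, CO2, humidity, energy, presence

rooms = APARTMENTS * ROOMS_PER_APARTMENT         # 40 rooms, one HVAC each
datapoints_per_cycle = rooms * SENSORS_PER_ROOM  # 200 readings sent to one node
setpoints_per_cycle = rooms                      # 40 setpoints computed centrally

print(rooms, datapoints_per_cycle, setpoints_per_cycle)  # 40 200 40
```

Every control cycle, all 200 readings must cross the building to a single point, which is where the wiring, routing, and privacy costs accumulate.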
A recent advance in stochastic approximation and optimization theory, developed by the ConvCAO research and development group, suggests another workaround toward achieving Building Energy Efficiency while consuming far fewer resources. The core idea is based on the Cognitive Adaptive Optimization tool (abbreviated as CAO), a centralized yet computationally efficient algorithm. CAO is a purely model-free (and therefore model-agnostic) approach where only a single (scalar) performance measurement is periodically required (e.g., on a daily basis in building management applications) to fine-tune the parameters defining the existing control strategy. CAO is also agnostic to the adopted control strategy: it just needs the current set of parameters defining that strategy (e.g., thresholds in rule-based control strategies, scalar gains in PID control strategies, gain-matrices in large-scale problems, etc.) and the respective overall performance index value. The main attribute of CAO, though, is its embedded self-learning mechanism, which enables rapid, smooth online convergence to more efficient performance values.
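The flavor of this kind of model-free fine-tuning can be sketched with a plain perturb-and-keep loop. This is a deliberate simplification, not the actual CAO update rule, and the quadratic objective in the usage example is a hypothetical stand-in for the daily building performance index:

```python
import random

def tune(theta, measure_performance, step=0.1, iterations=50, seed=0):
    """Model-free sketch: perturb the control parameters, keep improvements.

    Only a single scalar performance measurement per trial is consumed,
    mirroring the one-number-per-day feedback described in the text.
    """
    rng = random.Random(seed)
    best = measure_performance(theta)
    for _ in range(iterations):
        candidate = [t + rng.gauss(0.0, step) for t in theta]
        score = measure_performance(candidate)  # the single scalar measurement
        if score < best:  # lower = fewer resources for the same goal
            theta, best = candidate, score
    return theta, best

# Hypothetical stand-in for a performance index (minimum at all-zero parameters).
perf = lambda params: sum(p * p for p in params)
tuned, best = tune([1.0, -1.0], perf)
```

No model of the underlying system appears anywhere in the loop; the strategy's parameters improve purely from the scalar feedback, which is the property the text attributes to CAO.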
In application cases where no control strategy is defined, or where the adopted control strategy needs to be extended, an extended version of CAO, namely Parameterized Cognitive Adaptive Optimization (abbreviated as PCAO), has been developed; similarly to CAO, the control strategy is periodically fine-tuned using the exact same data set without any model assistance. The main difference between CAO and PCAO is the built-in Hamilton-Jacobi-Bellman (entropy-based) closed-loop control strategy incorporated within PCAO.
Both CAO and PCAO have already proven quite efficient as centralized Building Optimization and Management Systems in several simulated and real-life cases. However, both approaches, as all centralized approaches do, suffer from computational and data-transmission problems when it comes to Ultra-Large-Scale System (or System-of-Systems) applications.
To this end, both CAO and PCAO have been significantly revised toward a blockchain-based Building Optimization and Management architecture. Several coordinated agents, employing – in reality – instances of CAO or PCAO, form a distributed ecosystem of local control fine-tuning entities, where each agent is responsible for fine-tuning the local control strategy (e.g., the control strategy applied in each room or in each apartment) to achieve Local comfort for Global energy efficiency (Local4Global – L4G). The revised distributed optimization versions of CAO and PCAO are usually abbreviated as L4GCAO and L4GPCAO, respectively. In the same building example of 10 apartments and 40 rooms in total, the workflow within L4GCAO and L4GPCAO can be revised as follows:
To manage and control the 40 HVACs in an individually acceptable manner, measurements (for example: 1. indoor temperature, 2. CO2 levels, 3. relative humidity, 4. human presence) from each of the 40 rooms should be periodically transmitted to its local agent, i.e., 1 x 4 = 4 data-points per agent. The total energy consumption of the building can be made available by a single energy meter at the supply point of the building; this scalar measurement is transmitted periodically (e.g., on a daily basis) to each local agent.
As a result, by exploiting a single scalar energy measurement transmitted periodically from a central node to each local agent, global situational awareness can be achieved at the local level, allowing each agent to fine-tune its local HVAC control strategy in a globally acceptable manner. Moreover, based on the fine-tuned local control strategy and the local measurements (for example: 1. indoor temperature, 2. CO2 levels, 3. relative humidity, 4. human presence), a quite resource-efficient deployment and closed-loop control operation (e.g., in terms of wiring, data-routing, security and privacy, maintenance, computational power, storage capacity) are achieved.
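The distributed workflow can be sketched as follows: each hypothetical agent proposes a small change to its local setpoint, and all agents keep or revert their trials based solely on the one broadcast energy scalar. This is again a simplification in the spirit of L4GCAO/L4GPCAO rather than the published algorithms, and the toy `building_energy` surrogate is invented for illustration:

```python
import random

class LocalAgent:
    """Hypothetical per-room tuner driven only by the global energy scalar."""

    def __init__(self, setpoint, seed):
        self.setpoint = setpoint        # current trial HVAC setpoint
        self.accepted = setpoint        # last setpoint known to be good
        self.best_energy = float("inf")
        self.rng = random.Random(seed)

    def receive_global_energy(self, energy):
        # The only non-local information the agent ever receives is this scalar.
        if energy < self.best_energy:   # the trial improved the building-wide bill
            self.best_energy = energy
            self.accepted = self.setpoint
        # Propose the next trial around the last accepted setpoint.
        self.setpoint = self.accepted + self.rng.gauss(0.0, 0.2)

def building_energy(setpoints):
    """Toy surrogate: consumption grows with deviation from 21 degrees C."""
    return sum((s - 21.0) ** 2 for s in setpoints)

agents = [LocalAgent(23.0, seed=i) for i in range(4)]
initial = building_energy(a.setpoint for a in agents)
for _day in range(100):  # one scalar broadcast per "day"
    energy = building_energy(a.setpoint for a in agents)
    for agent in agents:
        agent.receive_global_energy(energy)
final = building_energy(a.accepted for a in agents)
```

Each agent exchanges one number per cycle with the rest of the building, yet the collective still drifts toward a globally better operating point, which is the "global awareness from a single hint" idea described above.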
To move from theory to practice, the L4GPCAO approach has recently been successfully presented and evaluated in a real-life building testbed (the E.ON ERC office building, located on the Aachen University campus, Germany), where the existing commercial control strategy was utilized as a performance benchmark. Without any pre-application tuning and tests, L4GPCAO achieved an energy consumption reduction of 34-35%, while the indoor thermal conditions were very similar to the benchmark case. One of the most valuable observations during these tests is that L4GPCAO was able to do so within less than a week (tests were only conducted during the last week of November 2017). Based on these tangible application observations, the authors would like to highlight that:
“More does not always mean better; L4GPCAO is another paradigm where, with fewer resources, more overall performance can be achieved when a group of synergetic agents is coordinated in a sufficient manner with just a hint!”
These findings are described in the article entitled Energy-efficient HVAC management using cooperative, self-trained, control agents: A real-life German building case study, recently published in the journal Applied Energy. This work was conducted by Iakovos T. Michailidis, Panagiotis Michailidis, Christos Korkas, and Elias B. Kosmatopoulos from the Centre for Research & Technology Hellas, and Thomas Schild, Roozbeh Sangi, Johannes Fütterer, and Dirk Müller from RWTH Aachen University. The presented advances were conducted within the scope of the Local4Global project coordinated by Prof. Elias B. Kosmatopoulos (Academic Research Partner in the Information Technologies Institute under the Centre for Research and Technology Hellas), funded under the European 7th Framework Programme (FP7).