From composing music to programming software, AI has demonstrated remarkable potential across various domains. But how far are we from having robots like those in sci-fi films - flying into buildings, scanning their surroundings, assessing levels of risk and estimating exposure?
While it might sound like a futuristic concept, it may be closer to reality than you think. In the near future, risk engineers will likely be able to use AI to support risk assessments, calculate risk exposure and draw up building blueprints by employing robots and drones.
The WTW Research Network is partnering with a cross-disciplinary team of experts from Loughborough University's Computer Science AI department, who are exploring ways to leverage AI to improve the efficiency, accuracy and usability of data collection during risk assessments. With a particular focus on fire risk assessments, the aim is to standardise and improve the calculation of pricing premiums through a more data-driven approach, which may also reduce information asymmetry within the industry. By utilising AI, risk assessments can become more streamlined, and the potential for errors and inconsistencies can be reduced.
This research has the potential to revolutionise the underwriting process in the insurance industry by making it more data-driven, helping risk assessors to evaluate risk accurately and price premiums accordingly.
One of the key goals of the research project is to develop an AI system that can automate the creation of custom floor plan diagrams. The system can be deployed on autonomous robots, such as drones, which can quickly navigate around a property and produce a highly precise floor plan diagram in a matter of minutes. This floor plan can then be labelled with fire safety infrastructure, either by hand by the risk assessor or autonomously by equipping the drone with a camera and object recognition tools. The risk assessor can then use this data to accurately determine risk and recommend a course of action to mitigate it. This may prove essential for accurate data-driven risk assessments, where every inch of a building can be autonomously measured and assigned an associated level of risk.
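To give a flavour of what such a labelling layer might look like in practice, the minimal sketch below attaches fire-safety annotations (placed manually by an assessor or supplied by an object detector) to metric positions on a generated floor plan. All names, fields and values here are illustrative assumptions, not part of the project's actual system.

```python
from dataclasses import dataclass, field

# Hypothetical annotation layer on top of an AI-generated floor plan.
@dataclass
class FireSafetyAnnotation:
    label: str    # e.g. "fire extinguisher", "smoke detector", "fire exit"
    x_m: float    # position on the floor plan, metres from the map origin
    y_m: float
    source: str   # "manual" if placed by the assessor, "detector" if from object recognition

@dataclass
class AnnotatedFloorPlan:
    image_path: str                 # the AI-generated floor plan image
    resolution_m_per_px: float      # metres represented by one pixel
    annotations: list[FireSafetyAnnotation] = field(default_factory=list)

    def add(self, label: str, x_m: float, y_m: float, source: str = "manual") -> None:
        self.annotations.append(FireSafetyAnnotation(label, x_m, y_m, source))

    def to_pixel(self, ann: FireSafetyAnnotation) -> tuple[int, int]:
        """Convert a metric annotation position to pixel coordinates for drawing."""
        return (int(ann.x_m / self.resolution_m_per_px),
                int(ann.y_m / self.resolution_m_per_px))

# Usage: a risk assessor (or an onboard detector) tags infrastructure on the plan.
plan = AnnotatedFloorPlan("site_floorplan.png", resolution_m_per_px=0.05)
plan.add("fire extinguisher", x_m=3.2, y_m=1.4)
plan.add("smoke detector", x_m=5.0, y_m=2.7, source="detector")
```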
To achieve this, a combination of robotics technology and generative AI models is being leveraged. The robotics system uses a technique called Simultaneous Localisation and Mapping (SLAM), which enables a robot to build two-dimensional and three-dimensional maps from data collected by sensors such as cameras or LiDAR scanners.
SLAM is typically used to let a robot localise itself within an environment: it builds a map to understand the world around it and tracks its own movement within that map. This information is crucial for tasks such as path planning and obstacle avoidance, and the technology is already widely used in robotic vacuum cleaners and self-driving cars. The output of a SLAM system includes a two-dimensional scan of all the observations the sensor system has collected, pieced together into a map. This map forms the foundation of the floor plan diagram.
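As a rough illustration of the mapping half of SLAM (estimating the pose itself is a harder problem handled by the SLAM algorithm), the sketch below projects a single 2D laser scan into the map frame using the robot's estimated pose. The function and variable names are assumptions for illustration, not taken from any particular SLAM library.

```python
import numpy as np

def scan_to_map_points(ranges, angles, pose):
    """Project one 2D LiDAR scan into map coordinates.

    ranges: (N,) distances returned by the scanner, in metres
    angles: (N,) beam angles relative to the sensor, in radians
    pose:   (x, y, theta) estimated robot pose in the map frame
    """
    x, y, theta = pose
    # Beam endpoints in the robot's own frame.
    local_x = ranges * np.cos(angles)
    local_y = ranges * np.sin(angles)
    # Rotate by the robot's heading and translate by its position.
    map_x = x + local_x * np.cos(theta) - local_y * np.sin(theta)
    map_y = y + local_x * np.sin(theta) + local_y * np.cos(theta)
    return np.stack([map_x, map_y], axis=1)

# Each scan, taken from wherever the drone currently believes it is, adds
# another patch of points; stitched together, these points form the map.
points = scan_to_map_points(
    ranges=np.array([2.0, 2.1, 2.3]),
    angles=np.array([-0.1, 0.0, 0.1]),
    pose=(1.0, 0.5, np.pi / 2),
)
```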
This map is known as a two-dimensional occupancy grid map. It consists of grid cells, where black cells indicate occupied space and white cells represent free space. Because the sensors used in SLAM, such as LiDAR scanners, can detect differences of millimetres, these maps can be remarkably accurate and may outperform traditional methods of floor plan drafting.
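To make the occupancy grid idea concrete, the sketch below builds a small grid at an assumed 5 cm resolution and marks the cells containing scan endpoints (walls, furniture) as occupied; the resolution and cell values are illustrative assumptions, not the project's actual settings.

```python
import numpy as np

RESOLUTION = 0.05                        # metres per grid cell
FREE, OCCUPIED, UNKNOWN = 255, 0, 127    # white = free, black = occupied, grey = unseen

def make_grid(width_m, height_m):
    """Create an empty occupancy grid covering width_m x height_m metres."""
    return np.full((int(height_m / RESOLUTION), int(width_m / RESOLUTION)),
                   UNKNOWN, dtype=np.uint8)

def mark_occupied(grid, points_m):
    """Mark the cells containing LiDAR endpoints as occupied (black)."""
    for x, y in points_m:
        row, col = int(y / RESOLUTION), int(x / RESOLUTION)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = OCCUPIED
    return grid

wall_points = [(3.20, 1.40), (3.25, 1.40), (3.30, 1.45)]
grid = mark_occupied(make_grid(width_m=10.0, height_m=8.0), wall_points)
```

In a real SLAM pipeline the cells along each beam would also be traced and marked free, but the black-occupied / white-free convention is the same as described above.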
While a 2D occupancy grid map may resemble a floor plan's structure or layout, notable differences make it unusable in its current form. For instance, occupancy grid maps often contain noise and artefacts, as well as furniture and accidental observations. For a robot, these inaccuracies are usually acceptable; for a risk assessor, they may not be.
Examples of noise prevalent in SLAM that limit its use for creating floor plan diagrams include sensor noise, furniture and other objects captured in the map, linear and angular offsets, and partial or unintentional observations.
To address these issues and transform SLAM maps into a more usable base for a floor plan, an AI architecture known as a Generative Adversarial Network (GAN), specialised for the task of image-to-image translation, was trained to detect these problems and convert occupancy grid maps into floor plan diagrams. The training process involves showing the AI system thousands of examples of occupancy grid maps and thousands of examples of floor plan diagrams. The AI is then tasked with learning an adaptable algorithm to convert one into the other: every translation is scored on how closely the transformed occupancy grid map resembles a floor plan diagram, and over time the AI improves. The endpoint of this training is an AI model that has learnt a mapping function which, in our case, removes objects from the map, removes sensor noise, corrects linear and angular offsets, and removes partial observations indicative of unintentional mapping while completing partial observations that indicate an incomplete but intentional section of the map.
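The loop described above broadly matches the pix2pix family of paired image-to-image GANs. The heavily simplified sketch below shows what one training step of such a model could look like in PyTorch, assuming paired occupancy-grid and floor-plan images; the placeholder networks, loss weights and dummy data are assumptions for illustration, not the architecture used in this research.

```python
import torch
import torch.nn as nn

# Placeholder networks: a real pix2pix-style model would use a U-Net generator
# and a PatchGAN discriminator; tiny conv stacks keep the sketch short.
generator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1),  # patch-wise real/fake scores
)

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(occupancy_grid, floor_plan):
    """One paired update: occupancy grid -> predicted plan, scored against the real plan."""
    fake_plan = generator(occupancy_grid)

    # Discriminator: learn to tell real (grid, plan) pairs from generated ones.
    real_score = discriminator(torch.cat([occupancy_grid, floor_plan], dim=1))
    fake_score = discriminator(torch.cat([occupancy_grid, fake_plan.detach()], dim=1))
    d_loss = adv_loss(real_score, torch.ones_like(real_score)) + \
             adv_loss(fake_score, torch.zeros_like(fake_score))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the target floor plan.
    fake_score = discriminator(torch.cat([occupancy_grid, fake_plan], dim=1))
    g_loss = adv_loss(fake_score, torch.ones_like(fake_score)) + \
             100.0 * l1_loss(fake_plan, floor_plan)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with dummy 64x64 single-channel images standing in for a paired dataset.
grid_batch = torch.rand(4, 1, 64, 64)
plan_batch = torch.rand(4, 1, 64, 64)
training_step(grid_batch, plan_batch)
```

The L1 term is what keeps the generated image close to the target floor plan, while the adversarial term pushes it to look like a plausible diagram rather than a blurred average.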
An example of an AI-generated floor plan diagram.
To build a floor plan like this, a drone simply needs to fly into each room and capture an observation. Figure (a) is an occupancy grid map as interpreted by the drone (SDR Site B from the Radish Dataset[1]), and Figure (b) is the consequent floor plan as interpreted by the AI system, in which it has completed rooms and removed sensor noise and furniture.