Extracting visual information from in situ images forms the basis of modern benthic and pelagic ecosystem research. With rapid advances in optical and imaging technologies and their increasing affordability, large volumes of high-resolution imagery are now routinely collected, increasingly from autonomous systems. This growth in data intensity has created the need to significantly improve data transmission to allow for efficient information extraction and faster instrument response times.

Theme 2 will develop intelligent data processing methods to enable near real-time, remote monitoring of targets including particulate matter, phytoplankton, zooplankton, microplastics, coral, seagrass, macroalgae, invertebrates, fish, and litter. It will address key bottlenecks in data processing, power consumption and communication through the application of artificial intelligence and machine learning, software improvements and optimised data extraction.
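To make the communication bottleneck concrete, the sketch below shows one form of optimised data extraction: transmitting compact per-image detection summaries rather than raw pixels. It is a minimal illustration assuming a Python-based on-platform pipeline; the labels, field names and frame dimensions are hypothetical, not TechOceanS specifications.

    # Minimal sketch, assuming a Python-based on-platform pipeline: rather than
    # transmitting raw frames, send a compact summary of what was detected.
    # Labels, field names and frame dimensions are hypothetical.
    import json

    def summarise_detections(detections):
        """Pack detections (label, confidence, bounding box) into a small payload."""
        payload = [
            {"l": d["label"], "c": round(d["confidence"], 3), "b": d["bbox"]}
            for d in detections
        ]
        return json.dumps(payload, separators=(",", ":")).encode()

    if __name__ == "__main__":
        raw_frame_bytes = 2448 * 2048 * 3  # one uncompressed RGB frame (hypothetical sensor)
        detections = [
            {"label": "copepod", "confidence": 0.91, "bbox": [412, 88, 64, 64]},
            {"label": "marine_snow", "confidence": 0.77, "bbox": [1020, 640, 32, 32]},
        ]
        summary = summarise_detections(detections)
        print(f"raw frame: {raw_frame_bytes} B, summary: {len(summary)} B "
              f"({raw_frame_bytes / len(summary):.0f}x smaller)")

Sending kilobyte-scale summaries in place of megapixel frames is one route to the faster information extraction and response times described above.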

Work package 4

Imaging and optics to TRL5 (Requirement to Benchtop)

This WP will identify and upgrade existing imaging sensor hardware and develop systems that allow the project to generate the crucial datasets needed for training artificial intelligence / machine learning (AI/ML) algorithms and for validating their functionality in relation to TechOceanS imaging operations.

Lead:

  • Dr Sari Giering
  • NOC
  • s.giering@noc.ac.uk

Work package 9

Building upon the hardware and systems developed in WP4, this WP will develop real-time, embedded deep-learning feature extractors for essential ocean variables, together with sensor-specific datasets for training AI/ML on the specific tasks required by the types of ocean research mission TechOceanS aims to support.
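As a minimal sketch of what such an embedded feature extractor could look like, the example below uses a compact convolutional network with dynamic int8 quantisation, assuming a PyTorch workflow; the architecture, feature length and quantisation step are illustrative assumptions rather than WP9's chosen design.

    import torch
    import torch.nn as nn

    class TinyFeatureExtractor(nn.Module):
        """Compact CNN mapping one camera frame to a short feature vector."""
        def __init__(self, n_features=64):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),           # global pooling -> (B, 32, 1, 1)
            )
            self.head = nn.Linear(32, n_features)  # compress to a short descriptor

        def forward(self, x):
            return self.head(self.backbone(x).flatten(1))

    if __name__ == "__main__":
        model = TinyFeatureExtractor().eval()
        # Dynamic int8 quantisation of the linear layer: one common route to lower
        # memory and power use on embedded hardware (an assumption, not WP9's method).
        model_q = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
        frame = torch.rand(1, 3, 256, 256)         # stand-in for one camera frame
        with torch.no_grad():
            print(model_q(frame).shape)            # torch.Size([1, 64])

Keeping the network small and quantised is one way to meet the power and processing constraints of in situ deployment while still producing features suitable for the AI/ML tasks above.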

Lead:

  • Dr Blair Thornton
  • UoS
  • Thornton@soton.ac.uk

Work package 14

This WP will demonstrate the operational capabilities and in situ remote awareness of benthic and pelagic essential ocean variables (EOVs). The units developed in this WP will be capable of trial deployments in real-world environments and will validate their capacity to provide global EOV coverage with decreased size and power consumption and with robust, calibrated end-to-end workflows that allow integration across a range of autonomous vehicles.

Lead:

  • Dr Xiangyu Weng
  • GEOMAR
  • xweng@geomar.de