
Cloud Large Scale Video Analysis

H2020-ICT-2015 Cloud-LSVA

Big Data - research

Oihana Otaegui

Vicomtech-IK4

Table of Contents

1. Cloud-LSVA in numbers
2. Problem to be solved
3. Concept & Approach
4. Project Planning

Cloud-LSVA in numbers

Cloud-LSVA - Cloud Large Scale Video Analysis

Coordinator: Vicomtech-IK4

Duration: 36 months (1.1.2016 – 31.12.2018)

Research and Innovation Action

H2020-ICT16-Big Data Research

Outcome A – Big Data technologies

Web Page: http://cloud-lsva.eu/

Problem to be solved

Context: towards autonomous driving, with new sensors.

The data/video handling problem, in numbers: 30 million kilometers; 500 people in Sri Lanka annotating video manually.

Problem to be solved

• Tools that can manage the extremely large volumes of data and provide support in the annotation task (ADAS, cartography market).

• Annotating enables valuable functionalities:

– Create large training datasets of visual samples for training models with supervised learning, to be used in vision-based detection.

– Generate ground truth scene descriptions based on objects (spatio-temporal) and events (temporal logic actions), to evaluate the performance of algorithms and systems that aim to detect or provide such descriptions (an illustrative sketch of such a description follows below).
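The shape of such a ground truth scene description can be illustrated with a minimal sketch. The structures and field names below are hypothetical, for illustration only, and not the project's actual annotation schema:

    # Hypothetical scene-description structures for ground truth annotation.
    # Objects carry spatio-temporal extent (a bounding box per frame);
    # events carry a temporal interval and the objects involved.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    BBox = Tuple[int, int, int, int]   # x, y, width, height in pixels

    @dataclass
    class ObjectTrack:
        object_id: str                 # e.g. "pedestrian_0001"
        label: str                     # e.g. "pedestrian", "car", "traffic_sign"
        boxes: Dict[int, BBox] = field(default_factory=dict)   # frame index -> box

    @dataclass
    class Event:
        event_id: str                  # e.g. "crossing_0001"
        label: str                     # e.g. "pedestrian_crossing", "overtaking"
        start_frame: int
        end_frame: int
        participants: List[str] = field(default_factory=list)  # object_ids involved

    @dataclass
    class SceneDescription:
        video_uri: str                 # reference to the raw recording
        objects: List[ObjectTrack] = field(default_factory=list)
        events: List[Event] = field(default_factory=list)

A description of this kind can serve both purposes listed above: the object tracks double as supervised training samples, and the event intervals provide the ground truth against which detection systems are benchmarked.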

Consortium


Objective

• Develop a software platform for efficient and collaborative semiautomatic labelling and exploitation of large-scale video data that solves existing needs for the ADAS and Digital Cartography industries.

• The platform will need to deal with diverse structured and unstructured data sourced from different sensors; special and dedicated tools will be deployed on a Cloud Platform.

• The platform will analyse and decompose each recorded scene in order to detect and classify relevant objects and events for specific scenarios. The system will focus on computer vision and machine learning techniques that can facilitate the analysis of complex situations (a semiautomatic pre-annotation sketch follows below).
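A minimal sketch of what the automatic part of semiautomatic labelling could look like, assuming a generic pre-trained detector; detect_objects is a placeholder, not a Cloud-LSVA component:

    # Sketch: run a generic detector over frames to produce candidate
    # annotations that a human annotator then reviews and corrects.
    def detect_objects(frame):
        """Placeholder for any pre-trained object detector; returns a list of
        (label, bounding_box, confidence) tuples for the given frame."""
        raise NotImplementedError

    def pre_annotate(frames, min_confidence=0.5):
        """Produce candidate annotations per frame for later human review."""
        candidates = {}
        for index, frame in enumerate(frames):
            candidates[index] = [
                (label, box)
                for label, box, score in detect_objects(frame)
                if score >= min_confidence      # keep only confident hypotheses
            ]
        return candidates                        # handed to the annotation tool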

Objective

- Handle and exploit large amounts of data, both for building new ADAS systems and for creating scene descriptions for system validation.

- Framework for sharing and combining scene analysis results, including update capabilities for in-vehicle ADAS systems.

- Fuse video data analysis with data from other sources, such that video annotations can integrate with and reference across the entire data corpus.

- Support annotation tools capable of learning from human-generated relevance feedback, in the form of corrections, verifications and specializations (automatic annotation and correction loop; a minimal sketch follows below).

- Automate as far as possible the video annotation process, to minimise human workload and improve system scalability and feasibility.
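A minimal sketch of that feedback loop, under the assumption that reviewed annotations are simply appended to the training set and the model is periodically retrained; all names are illustrative, not the project's API:

    # Sketch of a human-in-the-loop annotation cycle: automatic hypotheses are
    # reviewed by an annotator, and the corrections grow the training set used
    # to retrain the model.
    def annotation_cycle(model, clips, training_set, review, retrain):
        """One pass of the cycle; review and retrain are illustrative
        callables supplied by the caller."""
        for clip in clips:
            hypotheses = model.predict(clip)      # automatic annotation hypotheses
            corrected = review(clip, hypotheses)  # human corrections, verifications,
                                                  # specializations
            training_set.extend(corrected)        # feedback becomes new ground truth
        return retrain(training_set)              # improved model for the next cycle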

Concept & Approach

• Starting with Big Data and creating Big Data technologies

• Moving from Big Data to "Little Big Data"

• Closing the Loop

Conceptual Architecture

Source data & metadata (multiple sources, incremental input streams):

- Mobile sensors: traffic monitoring (outside videos, radar, lidar, GPS) and car monitoring (steering wheel, brakes, pedals, speed, acceleration)
- 3rd-party input over the network (open datasets, models, etc.)
- Data flows split into raw data (videos) and metadata (annotations, ...)

Cloud platform:

- Large Scale Processing and Large Scale Database: storage, curation, secure access, ...
- Video analytics, driven by analysis petitions (video footage) and producing automatic hypotheses (detected objects & events)
- Video annotation, with user interaction (load, save, ...) and supervision (ground truth, training sets, ...)
- Data fusion and supervised learning models
- Evaluation and benchmarking (performance reports, ...), business logic
- Results: validated / enriched metadata (maps, ADAS info, ...) and search (video/other for objects/events)

Back to the vehicle:

- "Little Big Data": local models for metadata annotation
- Deployable ADAS / object recognition models sent to the car
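To make the fusion idea concrete, here is a small illustrative sketch (not a Cloud-LSVA component) that cross-references an annotated video event with the closest time-stamped car-monitoring sample:

    # Sketch: align a video annotation with the nearest car-monitoring sample
    # (speed, brake status, GPS, ...) so annotations reference the wider corpus.
    from bisect import bisect_left

    def nearest_sample(timestamps, samples, t):
        """Return the sample whose timestamp is closest to t.
        timestamps must be sorted in ascending order."""
        i = bisect_left(timestamps, t)
        if i == 0:
            return samples[0]
        if i == len(timestamps):
            return samples[-1]
        before, after = timestamps[i - 1], timestamps[i]
        return samples[i] if (after - t) < (t - before) else samples[i - 1]

    # Example: what was the vehicle doing when an annotated event started?
    can_times = [0.0, 0.1, 0.2, 0.3]                       # seconds
    can_samples = [{"speed_kmh": 48, "brake": False},
                   {"speed_kmh": 46, "brake": False},
                   {"speed_kmh": 40, "brake": True},
                   {"speed_kmh": 33, "brake": True}]
    event_start_time = 0.22
    print(nearest_sample(can_times, can_samples, event_start_time))
    # -> {'speed_kmh': 40, 'brake': True}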

Cycle Approach

• Cycle 1 – Alpha Prototype (M9–M12): Deploy scene recording SW and HW into real vehicles and test the creation, format and upload of content from vehicles to the established cloud network. Preliminary analysis and annotation capabilities.

• Cycle 2 – Beta Prototype (M21–M24): New developments will exist on the cloud, in the form of annotation tools, training techniques and deployment of vision-based ADAS and map updating methods. Evaluate both the ability of the system to handle increasing volumes of collected data and the increased performance and added functionalities developed during the cycle.

• Cycle 3 – Gamma Prototype (M33–M36): Final tests, with the final deployed ADAS and map update techniques available for the test vehicles. Evaluate the performance of the cloud infrastructure under increased growth of real data collected from the test vehicles, both in terms of storage and processing.

Cycle Approach – Timeline

[Timeline figure: Integration & Validation phases for the Alpha, Beta and Gamma prototypes between M12 and M36, each followed by an Annotation Workshop / TESTFEST (1st, 2nd and 3rd); "Today" marks the current position in the project.]

THANK YOU!

Oihana Otaegui

Vicomtech-IK4

ootaegui@vicomtech.org
