
Active and Physics-Based Human Pose Reconstruction

Author

Summary, in English

Perceiving humans is an important and complex problem within computer
vision. Its significance is derived from its numerous applications, such
as human-robot interaction, virtual reality, markerless motion capture,
and human tracking for autonomous driving. The difficulty lies in the
variability in human appearance, physique, and plausible body poses. In
real-world scenes, this is further exacerbated by difficult lighting
conditions, partial occlusions, and the depth ambiguity stemming from
the loss of information in the 3D-to-2D projection. Despite these
challenges, significant progress has been made in recent years,
primarily due to the expressive power of deep neural networks trained on
large datasets. However, creating large-scale datasets with 3D
annotations is expensive, and capturing the vast diversity of the real
world is demanding. Traditionally, 3D ground truth is captured in
motion capture laboratories that require large investments. Furthermore,
many laboratories cannot easily accommodate athletic and dynamic
motions. This thesis studies three approaches to improving visual
perception, with an emphasis on human pose estimation, that can
complement improvements to the underlying predictor or training data.

The first two papers present active human pose estimation, where a
reinforcement learning agent is tasked with selecting informative
viewpoints to reconstruct subjects efficiently. The papers discard the
common assumption that the input is given and instead allow the agent to
move to observe subjects from desirable viewpoints, e.g., those which
avoid occlusions and for which the underlying pose estimator has a low
prediction error.
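
As a rough, hypothetical sketch of this idea (not the papers' implementation), the loop below has an agent repeatedly choose a viewpoint, observe the subject from it, and fuse the resulting pose estimates; every name here (scene, estimator, policy, the naive averaging fusion) is an assumed placeholder.

    import numpy as np

    def active_pose_reconstruction(scene, estimator, policy, budget=5):
        """Hypothetical sketch: reconstruct a 3D pose by actively choosing viewpoints.

        scene     -- environment the agent can observe from chosen viewpoints
        estimator -- pretrained pose predictor mapping an image to (J, 3) joints
        policy    -- RL policy proposing the next viewpoint given its internal state
        budget    -- number of viewpoints the agent may visit
        """
        state = policy.initial_state()
        estimates = []
        for _ in range(budget):
            viewpoint = policy.select_viewpoint(state)   # e.g. move around the subject
            image = scene.observe(viewpoint)             # capture/render from that view
            estimates.append(estimator.predict(image))   # per-view 3D pose estimate
            state = policy.update(state, image, viewpoint)
        # Naive fusion: average joint positions across views; a real system would
        # weight views by confidence or triangulate 2D detections instead.
        return np.mean(np.stack(estimates), axis=0)

In the papers, the agent is rewarded for selecting viewpoints that reduce reconstruction error, which the fixed budget and naive fusion above only crudely mimic.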

The third paper introduces the task of embodied visual active learning,
which goes further and assumes that the perceptual model is not
pre-trained. Instead, the agent is tasked with exploring its environment
and requesting annotations to refine its visual model. Learning to
explore novel scenarios and efficiently request annotations for new data
is a step towards lifelong learning, where models can evolve beyond
what they learned during the initial training phase. We study the
problem for segmentation, though the idea is applicable to other
perception tasks.
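
A minimal sketch of such a loop, assuming a hypothetical agent/annotator API (none of these names come from the paper): the agent explores its environment, requests a ground-truth segmentation mask only when its model is uncertain and the label budget allows it, and fine-tunes the model online.

    def embodied_active_learning(agent, seg_model, annotator, steps, label_budget):
        """Hypothetical sketch of embodied visual active learning for segmentation."""
        labels_used = 0
        obs = agent.reset()
        for _ in range(steps):
            pred, uncertainty = seg_model.predict_with_uncertainty(obs)
            # Request an annotation only when the model is unsure and budget remains.
            if labels_used < label_budget and uncertainty > seg_model.threshold:
                mask = annotator.annotate(obs)     # ground-truth segmentation mask
                seg_model.finetune(obs, mask)      # refine the visual model online
                labels_used += 1
            # The exploration policy decides where to move next, e.g. towards
            # views the model is uncertain about.
            obs = agent.step(pred, uncertainty)
        return seg_model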

The final two papers propose improving human pose estimation by
integrating physical constraints. These regularize the reconstructed
motions to be physically plausible and serve as a complement to current
kinematic approaches. Whether a motion has been observed in the training
data or not, the predictions should obey the laws of physics. Through
integration with a physical simulator, we demonstrate that we can reduce
reconstruction artifacts and enforce, e.g., contact constraints.
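
The papers couple the estimator with a physical simulator; as a much simpler stand-in, the sketch below conveys the flavour of physics-based regularization through soft penalties on foot sliding, ground penetration, and jittery accelerations. The tensor shapes, joint indices, and loss weight are assumptions for illustration, not the thesis formulation.

    import torch

    FOOT_JOINTS = [10, 11]  # assumed ankle-joint indices in the skeleton

    def physics_regularizer(poses, contacts):
        """poses: (T, J, 3) predicted 3D joints; contacts: (T, 2) foot-contact labels in {0, 1}."""
        feet = poses[:, FOOT_JOINTS, :]                            # (T, 2, 3)
        # Feet labelled as in contact should not slide between frames ...
        slide = (feet[1:] - feet[:-1]).norm(dim=-1) * contacts[1:]
        # ... or penetrate the ground plane (assumed to be z = 0).
        penetrate = torch.relu(-feet[..., 2])
        # Finite-difference accelerations penalize jitter and teleporting joints.
        accel = (poses[2:] - 2 * poses[1:-1] + poses[:-2]).norm(dim=-1)
        return slide.mean() + penetrate.mean() + accel.mean()

    def total_loss(kinematic_loss, poses, contacts, w_phys=0.1):
        # Combine the usual kinematic loss (e.g. 2D reprojection error) with the
        # physics term; a simulator-in-the-loop approach enforces these harder.
        return kinematic_loss + w_phys * physics_regularizer(poses, contacts)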

Department(s)

Publication year

2023

Language

English

Publication/Journal/Series

Dissertation

Issue

70

Document type

Doctoral thesis

Publisher

Department of Computer Science, Lund University

Subject

  • Computer Vision and Robotics (Autonomous Systems)

Keywords

  • computer vision
  • human pose estimation
  • reinforcement learning
  • physics-based human pose estimation
  • active learning

Status

Published

Project

  • Deep Learning for Understanding Humans

ISBN/ISSN/Other

  • ISSN: 1404-1219
  • ISBN: 978-91-8039-472-7
  • ISBN: 978-91-8039-471-0

Defence date

13 January 2023

Defence time

10:15

Defence location

Lecture Hall MH:Hörmander, Centre for Mathematical Sciences, Sölvegatan 18, Faculty of Engineering LTH, Lund University, Lund. The dissertation will be live streamed, but part of the premises will be excluded from the live stream.

Opponent

  • Fahad Khan (Docent)