
Femtosecond 3D Holographic Video Imaging with High Spatiotemporal Resolution

Research Project

Project/Area Number 23KF0011
Research Category

Grant-in-Aid for JSPS Fellows

Allocation Type Multi-year Fund
Section Foreign
Review Section Basic Section 30020: Optical engineering and photon science-related
Research Institution Chiba University

Principal Investigator

Takashi Kakue, Chiba University, Graduate School of Engineering, Associate Professor (40634580)

Co-Investigator (Kenkyū-buntansha) BLINDER DAVID, Chiba University, Graduate School of Engineering, JSPS International Research Fellow
Project Period (FY) 2023-04-25 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥2,000,000 (Direct Cost: ¥2,000,000)
Fiscal Year 2024: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2023: ¥1,000,000 (Direct Cost: ¥1,000,000)
Keywords Holography / Ultrafast imaging / Light-in-flight / Diffraction algorithms / Coded aperture
Outline of Research at the Start

LIF holography is a simultaneous 3D and ultrafast imaging technology, but it has significant spatiotemporal trade-offs and distortions. We aim to address these limitations by creating a novel computational LIF holographic setup to achieve unprecedented spatiotemporal resolution and accuracy.

Outline of Annual Research Achievements

We have designed multiple novel diffraction algorithms for efficient light propagation that include the time component: spatiotemporal point-spread functions, temporal Fresnel diffraction, Gabor-based diffraction, and polygonal CGH algorithms.
Their principles were validated with experimentally acquired holograms (WP2). We presented the work on temporal diffraction algorithms in an invited talk at OPIC2024, and have submitted multiple journal papers on these results.
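To illustrate the general family of diffraction algorithms named above, the following is a minimal sketch of free-space propagation via the angular spectrum of plane waves, the standard numerical building block behind Fresnel-type diffraction. This is an illustrative textbook kernel, not the project's actual implementation, and the function name and parameters are the author's own choices for this example.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, distance):
    """Propagate a complex 2-D field over `distance` (meters) using the
    angular spectrum of plane waves; `pitch` is the sampling interval.
    Illustrative sketch only."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are zeroed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0.0,
                 np.exp(2j * np.pi * distance / wavelength * kz),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus for propagating components, applying the kernel with `-distance` undoes the propagation, which makes round-trip checks easy.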
Furthermore, we have developed a system for acquiring high-speed holographic video through multiple acquisitions. Unlike LIF holography, this method is unsuitable for real-time acquisition, but it can acquire holograms at higher spatiotemporal resolution. It will also serve as a ground-truth reference for the targeted signal and as a calibration tool for, e.g., coded apertures. (WP1)

We have also prepared a design for a spatiotemporally coded aperture, the main component of the enhanced LIF holography with higher spatiotemporal resolution. Contrary to conventional coded holographic apertures, this device should rapidly modify the wavefront over time, enabling the distinction of dynamic features. We are now in the process of having it manufactured. (WP1)
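A hypothetical numerical model of such a spatiotemporally coded aperture could assign an independent random pure-phase mask to each time frame, so that one detector exposure mixes temporal content. This is only an assumption for illustration; the actual manufactured device and coding scheme may differ, and all names here are the author's own.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def coded_aperture_stack(ny, nx, n_frames):
    """Hypothetical spatiotemporal coded aperture: one independent
    random pure-phase mask (unit modulus) per time frame."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_frames, ny, nx))
    return np.exp(1j * phases)

def encode(video, masks):
    """Single detector exposure: the per-frame coded fields summed
    over time, folding temporal content into one 2-D hologram."""
    return (video * masks).sum(axis=0)
```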

Current Status of Research Progress

1: Research has progressed more than originally planned.

Reason

Most of the work pertaining to diffraction-algorithm modeling is finished, and several manuscripts are in preparation. The work on temporal diffraction algorithms was presented in an invited talk at OPIC2024, for which a publication in Optics Express is being prepared.
We are submitting our work on Gabor propagation to the Optica journal, and our research on polygonal CGH with bump mapping was submitted to Optics Letters.

We still need to build the new experimental LIF holographic setup, which we aim to complete in July. This is necessary to validate the newly proposed phaso-temporal holographic video acquisition method, which acquires video at high resolution in space and time and serves as a ground-truth reference and calibration tool. This method would lead to a separate publication and is an important step toward the enhanced LIF acquisition technique.

We are now in the process of manufacturing the spatiotemporal coded aperture, which could code a holographic signal simultaneously in space and time. This will be done through a joint collaboration between Chiba University and the Vrije Universiteit Brussel. This key component will, in principle, allow us to reach the resolution-enhancement goal of this project, leading to a journal publication.

Strategy for Future Research Activity

The primary goal is now to construct the novel experimental LIF holographic setup in order to experimentally validate the phaso-temporal holographic video acquisition method and later extend it to include the coded aperture. We should have all the necessary components by now (except for the coded aperture; see below), and we expect the setup to be realized in the next few months.

The second goal is to create the coded aperture and the test samples for our setup. The plan is to sandwich a micrometer-thin coded random phase plate between two beamsplitter-coated plates; this will be jointly achieved by manufacturing the coated beamsplitter plates in Japan and using the two-photon-polymerization-based 3D printer at B-PHOT at the Vrije Universiteit Brussel.

Finally, I plan to continue refining the temporal diffraction algorithms, accounting for coherence non-idealities such as multiple wavelengths, and to work on the inverse imaging algorithms converting the coded hologram data into an ultrafast holographic video.
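As a sketch of what such an inverse imaging step could look like, assume a pixel-wise linear coding model in which the exposure is y = Σ_t m_t·x_t (an illustrative assumption, not the project's actual algorithm). The adjoint (matched-filter) estimate is the usual first step before adding priors; the function name and normalization below are the author's own choices.

```python
import numpy as np

def adjoint_decode(hologram, masks):
    """Matched-filter (adjoint) estimate of the per-frame fields x_t
    from a single coded exposure y = sum_t m_t * x_t, normalized by
    the per-pixel mask energy. A practical inverse solver would add
    spatial and temporal priors; this is only the first step."""
    weight = (np.abs(masks) ** 2).sum(axis=0)
    return np.conj(masks) * hologram / weight
```

With more frames than measurements per pixel, the problem is underdetermined, which is why regularization or learned priors would be needed on top of this estimate.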

Report

(1 result)
  • 2023 Research-status Report
  • Research Products

    (3 results)

All 2024 2023

All Presentation (3 results) (of which Int'l Joint Research: 2 results,  Invited: 2 results)

  • [Presentation] Numerical models for ultrafast diffraction in light-in-flight holography (2024)

    • Author(s)
      David Blinder and Takashi Kakue
    • Organizer
      OPTICS & PHOTONICS International Congress 2024
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research / Invited
  • [Presentation] Joint color optimization for holographic displays (2023)

    • Author(s)
      David Blinder, Fan Wang, Peter Schelkens, Takashi Kakue, Tomoyoshi Shimobaba
    • Organizer
      Optica Imaging and Applied Optics Congress 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Computer-generated holography for 3D lines and curves (2023)

    • Author(s)
      David Blinder, Takashi Nishitsuji, Peter Schelkens, and Takashi Kakue
    • Organizer
The 2nd Holographic Display Study Group Meeting, 2023 (ホログラフィック・ディスプレイ研究会)
    • Related Report
      2023 Research-status Report
    • Invited


Published: 2023-04-26   Modified: 2024-12-25  
