Show HN: DeepDream for video with temporal consistency

Hacker News

This Hacker News "Show HN" post introduces a PyTorch DeepDream implementation for video that combines RAFT optical flow estimation with occlusion masking to achieve temporal consistency and prevent ghosting.


DeepDream for video with temporal consistency. Features RAFT optical flow estimation and occlusion masking to prevent ghosting. A PyTorch implementation.


jeremicna/deepdream-video-pytorch


deepdream-video-pytorch

This is a fork of neural-dream, a PyTorch implementation of DeepDream. It adds optical flow estimation and occlusion masking so that DeepDream can be applied to videos with temporal consistency.

Features

Demo

With temporal consistency

With frames processed independently

Inputs

Setup

Dependencies

This project requires the following key packages:

Install Dependencies:
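The original install commands are missing from this capture. A minimal sketch of a typical setup for a PyTorch project that uses RAFT optical flow (RAFT ships with torchvision) might look like the following; the `opencv-python` line is an assumption for video frame I/O, not confirmed by the source:

```shell
# Assumed install commands -- the repo's actual requirements list is not
# shown in this capture. RAFT optical flow is bundled with torchvision.
pip install torch torchvision

# Assumed: a video I/O library for reading/writing frames.
pip install opencv-python
```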

Download Models:
Run the download script to fetch the standard Inception/GoogLeNet models:

To download all compatible models:
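The capture omits the actual commands. Upstream neural-dream ships a model download script under `models/`; the invocations below follow that convention, and the `-models all` flag name is an assumption:

```shell
# Assumed invocations, following the upstream neural-dream layout.
# Fetch the standard Inception/GoogLeNet models:
python models/download_models.py

# Fetch all compatible models (flag name assumed):
python models/download_models.py -models all
```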

Usage

1. Video DeepDream

To dream on a video, use the video_dream.py script. This wrapper accepts specific video arguments and any argument accepted by the standard image dreamer (e.g., layers, octaves, iterations).

Basic Video Command:
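The original example command is missing from this capture. A sketch of what a basic invocation might look like; the `-input_video` and `-output_video` flag names are assumptions, while `-num_iterations` comes from upstream neural-dream:

```shell
# Illustrative only -- the video flag names (-input_video, -output_video)
# are assumed, since the capture omits the original example.
python video_dream.py -input_video input.mp4 -output_video dream.mp4 \
    -num_iterations 1
```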

Note: For video processing, we recommend using -num_iterations 1. The temporal consistency from optical flow means each frame builds on the previous dream, so fewer iterations per frame are needed compared to single images.

Video-Specific Arguments:

2. Standard DeepDream Arguments

All of the following arguments come from the single-frame implementation and can be mixed and matched with the video-specific arguments above. Refer to neural-dream for more information on single-frame parameters.

Example combining video and standard args:
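The original combined example is missing from this capture. A sketch under the same assumptions as above: the video flags (`-input_video`, `-output_video`) are hypothetical, while `-dream_layers`, `-octave_scale`, and `-num_iterations` are standard neural-dream arguments:

```shell
# Hypothetical combination of video-specific flags (assumed names) with
# standard neural-dream arguments.
python video_dream.py -input_video input.mp4 -output_video dream.mp4 \
    -num_iterations 1 \
    -dream_layers inception_4d_3x3_reduce \
    -octave_scale 0.6
```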

For single image processing only:
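The original single-image command is missing from this capture. A sketch following the upstream neural-dream entry point (script name per that project; the image path is a placeholder):

```shell
# Standard single-frame invocation via the upstream neural-dream script;
# the input path is a placeholder.
python neural_dream.py -content_image my_image.jpg -gpu 0
```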

Note: Paths to images should not contain the ~ character; use relative or absolute paths.

Frequently Asked Questions

Problem: The program runs out of memory (OOM)
Solution:

Problem: Video processing is very slow
Solution:
Video DeepDreaming is computationally expensive. It runs the full DeepDream process per frame, plus Optical Flow calculations.

Memory Usage

By default, neural-dream uses the nn backend.

With default settings, standard execution uses ~1.3 GB GPU memory.
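If memory is tight, upstream neural-dream exposes a cuDNN backend and an image-size option that typically reduce usage; a sketch (these flags exist upstream, but their effect on this fork is assumed):

```shell
# Assumed memory-reduction options inherited from upstream neural-dream:
# switch to the cuDNN backend and render at a smaller image size.
python neural_dream.py -content_image my_image.jpg \
    -backend cudnn -cudnn_autotune -image_size 512
```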

Multi-GPU Scaling

You can use multiple devices with -gpu and -multidevice_strategy.
Example: -gpu 0,1,2,3 -multidevice_strategy 3,6,12 splits layers across 4 GPUs. See ProGamerGov/neural-dream for details.
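The split described above, written out as a full command line (the input path is a placeholder; other flags are omitted):

```shell
# Layers 1-3 on GPU 0, 4-6 on GPU 1, 7-12 on GPU 2, the rest on GPU 3.
python neural_dream.py -gpu 0,1,2,3 -multidevice_strategy 3,6,12 \
    -content_image my_image.jpg
```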

