I wrote a wee utility script in Python to process video in an 'arty' way. Outputs take the form of a video, or, more commonly, a still image. With the latter, the central idea is the collapse of the video's time dimension in various ways.
The various 'modes' can be summarised thusly:
- Slitscan/Striper: process an input video into an output still image by capturing a line of pixels from each frame of the video. 'Slitscan' captures row/column 1 from frame 1, row/column 2 from frame 2, etc. 'Striper' always captures the midline row/column from each video frame. (A minimal sketch of this loop appears after this list.)
- Tiler: process an input video into an output still image by capturing a small swatch from a video frame at regular intervals. This one can be a fun way to 'summarise' a busy video into a single image.
- Mirror: process an input video into an output video by mirroring it along its horizontal or vertical midline.
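
To make the slitscan/striper idea concrete, here's a minimal sketch of the frame loop in Python with OpenCV and numpy. This is not the ArtVideo.py code itself: the file names are hypothetical, it only does row-wise capture (the column-wise variant is analogous), and tiler follows the same pattern but grabs a small swatch every few frames instead of a single line.

```python
import cv2
import numpy as np

# Minimal sketch of the slitscan/striper collapse (not ArtVideo.py itself).
# 'beach.mp4' and 'slitscan.png' are hypothetical file names.
cap = cv2.VideoCapture("beach.mp4")
rows = []
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    height = frame.shape[0]
    # Slitscan: row 1 from frame 1, row 2 from frame 2, ... (wrapping if the
    # video has more frames than the frame has rows).
    y = frame_index % height
    # Striper would instead always take the midline row: y = height // 2
    rows.append(frame[y, :, :])
    frame_index += 1
cap.release()

# Stack one captured row per frame into a single output still.
output = np.stack(rows, axis=0)
cv2.imwrite("slitscan.png", output)
```
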
To demonstrate, consider the following images:
Mode/Sub-mode | Beach Scene | Slowly Rotating Flower |
---|---|---|
Still from input video | ||
mode: slitscan; submode: 0 | ||
mode: slitscan; submode: 1 | ||
mode: striper; submode: 0 | ||
mode: striper; submode: 1 | ||
mode: tiler; submode: 10x10 | ||
Still from output video; mode: mirror; submode: 0 | ||
Still from output video; mode: mirror; submode: 1 | ||
Still from output video; mode: mirror; submode: 2 | ||
Still from output video; mode: mirror; submode: 3 | ||
Since I'm using a little bit of OpenCV to do some of the processing, the easiest way to run my ArtVideo.py script is via Docker. Paste the following command into a bash script:

```bash
docker run -it --rm --name ArtVideo --mount type=bind,source="$(pwd)"/ArtVideo,target=/app -w="/app" yoanlin/opencv-python3 python3 ArtVideo.py $@
```

... and call that script with the `--help` option to get some guidance about the script parameters.
Addendum: slightly related (and the subject of a parallel project): motion extraction.
Enjoy!
To Take This One Step Further...
Consider: it is possible to think of a video as a 3D 'block' of pixels - each frame has pixels arranged in columns/rows (X/Y), and frames are stacked in time (Z). A video is normally viewed by peeling slices in Z. But what if we peel slices/frames in X or Y? We can use a variant of the Slitscan/Striper mode, above, to do just that. Here's the flower video sliced in Y:
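
As a rough illustration of that re-slicing (again with OpenCV and numpy, hypothetical file names, and the whole video held in memory rather than streamed, so it's only a sketch rather than the ArtVideo.py implementation):

```python
import cv2
import numpy as np

# Sketch of 're-slicing' a video along Y. File names are hypothetical.
cap = cv2.VideoCapture("flower.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if unknown
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Stack the frames into a 3D block of pixels: (time, height, width, channels).
block = np.stack(frames, axis=0)
t, h, w, c = block.shape

# Normal playback peels slices along the time axis: block[i] is frame i.
# Peeling along Y instead gives one (time x width) image per row index,
# and those images become the frames of a new video.
writer = cv2.VideoWriter("flower_y.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"),
                         fps, (w, t))
for y in range(h):
    slice_y = np.ascontiguousarray(block[:, y, :, :])  # shape (t, w, c)
    writer.write(slice_y)
writer.release()
```
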
All Video Art In Python assets by Chris Molloy are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.