ffglitch pixel formats

This weekend I was at the Poetic Computation Group Belgium and Greg Berger mentioned doing glitch art with FFmpeg.

Being a former FFmpeg developer myself, I remembered having done quite a bit of experimentation with it, but it was mostly ephemeral and just for fun. Then I thought to myself: what about turning those bits and pieces of experimentation into blog posts and projects on GitHub?

And so the first project was born in my very new ffglitch repository. The name of the repository suggests more FFmpeg glitch art projects will come, and I hope my laziness doesn’t stop me from doing this (which it likely will).

pix_fmt

https://github.com/ramiropolla/ffglitch/tree/master/pix_fmt

The pix_fmt project consists of doing a whole bunch of incorrect pixel format conversions. If you don’t know what pixel formats are, read up on my previous blog post pixel formats 101.

The project is a script that generates a bunch of pixel format conversion combinations. For example, one such conversion takes raw RGB data as input, pretends that data is YUV420p, and converts it back to RGB. The project does this for every possible input-to-output pixel format combination. This amounts to nearly 10000 images, about 4000 of which are unique.
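
To make that more concrete, here is a minimal sketch of what one such conversion could look like, driven from Python with two plain ffmpeg calls. The file names, the 512x512 size, and the subprocess wrapper are just illustrative; the real pix_fmts.py presumably emits a makefile full of ffmpeg commands along these lines.

import subprocess

# Hypothetical sketch of a single "incorrect" conversion.

# 1. Decode input.png into raw, packed rgb24 bytes.
subprocess.run(["ffmpeg", "-y", "-i", "input.png",
                "-f", "rawvideo", "-pix_fmt", "rgb24", "rgb24.raw"],
               check=True)

# 2. Lie to ffmpeg: claim those rgb24 bytes are really yuv420p, and convert
#    them back to a viewable image. The 512x512 size is made up; the buffer
#    sizes won't match in general, so only the first "frame" is kept here.
subprocess.run(["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "yuv420p",
                "-s", "512x512", "-i", "rgb24.raw",
                "-frames:v", "1", "rgb24_yuv420p.png"],
               check=True)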

To try it out, just do:

$ git clone https://github.com/ramiropolla/ffglitch.git
$ cd ffglitch/pix_fmt
$ python pix_fmts.py <input.png> > makefile
$ make -r -k -jN
$ cd output_dir
$ fdupes --delete --noprompt .

What are the results like? Well, here are some samples (originals first):

libcaca logo:

[images: libcaca-logo (original), followed by glitched samples: yuvj444p_yuv420p12le, yuva422p9le_yuv420p16be, yuva422p9be_yuv444p14le, yuv444p16be_yuv420p14be, yuv444p12le_yuv444p16be, xyz12be_yuv420p10le, gbrp_yuvj440p, gbrp9be_yuv422p10le, gbrp9be_bgr48le, gbrp12le_yuv420p10le]

Tarrafa hackerspace logo with yellow background:

[images: argb_argb (original), followed by glitched samples: yuva444p_yuv422p12le, yuv444p16le_yuv422p14le, yuv444p16le_yuv422p12le, gbrp10be_xyz12le, bgr48be_yuv444p14be]

Tarrafa hackerspace logo with white background:

[images: argb_argb (original), bgr24_yuv444p]

Check it out, make your own ffglitch.pix_fmt and post the link in the comments!

Have fun…

pixel formats 101

How is an image represented in the computer’s memory?

There are a billion different file formats, codecs, and pixel formats that can be used for storing images. Think BMP, PNG, WEBP, BPG, GIF (pronounced JIF), lossy, lossless, whatever…

But at some point, the image is read from disk, demuxed, decompressed, and then we have a bunch of data in the computer’s memory. It is raw data. Just pixels. What is that raw representation of pixels like?

You’ve probably heard of RGB. The simplest answer could be:

First there’s a red pixel, then a green pixel, then a blue pixel, and so on and so on…

[image: rgbrgbrgb]

Great. Kind of. In that case, each pixel would be spread out across its three components, so we would have

one pixel, then another pixel, then another pixel, and so on and so on…

and for each of those pixels, we would have

one red component, then one green component, then one blue component.

[image: pixel_rgbrgbrgb]

But images are two-dimensional. They are made up of many lines stacked one on top of the other. The computer’s memory is just one long line. Should we have one RAM stick for each line of our image?

Of course not, we just put the lines one next to the other. So now we have:

Pixel 1 from line 1, pixel 2 from line 1, …, pixel n from line 1, pixel 1 from line 2, pixel 2 from line 2, …, pixel n from line 2, …, …, …, …, pixel 1 from line m, pixel 2 from line m, …, pixel n from line m.

where n is the width of one line and m is the height of the image.

[image: line_pixel_rgbrgbrgb]

If each component for each pixel is 1 byte, then each pixel is 3 bytes, each line is n * 3 bytes, and the entire image is m * n * 3 bytes.
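
If you prefer code to prose, here is a tiny sketch of that packed RGB layout; the 640x480 size and the function name rgb24_offset are just examples:

# Packed rgb24 layout: lines stored back to back, 3 bytes per pixel.
width, height = 640, 480            # n = width, m = height (example values)

def rgb24_offset(x, y):
    """Byte offset of the R component of pixel (x, y); G and B follow at +1, +2."""
    return (y * width + x) * 3

line_size  = width * 3              # n * 3 bytes per line
image_size = height * line_size     # m * n * 3 bytes total
print(image_size)                   # 921600 bytes for 640x480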

Now let’s look at another pixel format: YUV. It is very widely used for lossy video codecs and lossy image compression because it easily deals with the fact that our eyes are more sensitive to brightness than to color. Each pixel is transformed into one component for luminance (roughly equivalent to brightness) and two funky values describing color information. We will call those components Y (luminance), U and V (chrominance). Let’s suppose they also take 1 byte per component.

So, for this pixel format, we just do the same as with RGB, but storing the YUV components instead, right? Like so:

one Y component, then one U component, then one V component, and so on and so on…

[image: line_pixel_yuvyuvyuv]

Sure, we could, but that’s not normally what we do. Remember that our eyes are better at perceiving luminance than chrominance? What happens if we throw away half of the information related to chrominance? Well, we still get a pretty darn good looking image. What we have now is:

one Y component, then one U component, then another Y component, then one V component, and so on and so on…

[image: line_pixel_yuyvyuyvyuyv]

Remember that the RGB image used n * m * 3 bytes? The YUV image with half the color information thrown out will take n * m bytes for Y, and n * m for both U and V combined, for a total of n * m * 2 bytes. Heck, we just cut the image size by 33%!!! And it still looks good (search on Google for image comparisons; I’m too lazy to make them myself). In the image above, even though it takes fewer bytes, we now described 4 pixels per line instead of 3.
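
Here is the same idea as a sketch, assuming the packed layout above (FFmpeg calls it yuyv422), a made-up 640x480 size, and an even width:

# Packed 4:2:2 layout: each 4-byte group Y0 U Y1 V describes 2 neighbouring pixels.
width, height = 640, 480                  # example values; width assumed even

def yuyv422_pixel(buf, x, y):
    """Return (Y, U, V) for pixel (x, y); horizontal neighbours share U and V."""
    group = ((y * width + x) // 2) * 4    # start of the Y0 U Y1 V group
    y_val = buf[group + (x % 2) * 2]      # Y0 at +0, Y1 at +2
    return y_val, buf[group + 1], buf[group + 3]

print(width * height * 2)                 # 614400 bytes, 2/3 of the rgb24 size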

But that’s not all the fun we can get out of YUV. Suppose you have an old black and white film (actually, what we call black and white in this case is really grayscale: it’s not only 100% black or 100% white pixels, but many shades of grey between full black and full white).

So suppose you have a film with many shades of grey. There is no color information at all in there. Then why are we wasting precious disk space or precious memory with all three Y, U, and V components? We can just throw away U and V entirely and still have the exact same output on our screens. We just cut the image size by 66% compared to the original RGB image!!! What we have now is:

one Y component, another Y component, another Y component, and so on and so on…

[image: line_pixel_yyyyyyyy]

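For this grayscale case (FFmpeg calls the format gray), the arithmetic is even simpler; a tiny sketch with the same made-up 640x480 size:

# Grayscale ("gray" in FFmpeg): one Y byte per pixel, nothing else.
width, height = 640, 480

def gray_offset(x, y):
    return y * width + x

print(width * height)        # 307200 bytes, exactly 1/3 of the rgb24 size
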
Now suppose you have a film that does have color, but some people watching it might be stuck with black and white TVs. Some viewers will get the colored stuff, while others only care about the Y. Therefore we HAVE to transmit Y, U, and V. But then, black and white TVs will have to sift through the data and select only the Y components. They will have to do:

get Y component, drop U component, get Y component, drop V component, get Y component, drop U component, get Y component, drop V component, and so on and so on…

[image: line_pixel_yuyvyuyvyuyv_nouv]

If only there were a way to arrange the Y, U, and V data so that it was simpler to select each specific type of component… Oh wait, there is a way! It’s called planar YUV. It’s all still the same data, but the way it’s laid out in memory looks like this:

plane 1: one Y component, another Y component, another Y component, and so on and so on…
plane 2: one U component, another U component, another U component, and so on and so on…
plane 3: one V component, another V component, another V component, and so on and so on…

[image: line_pixel_planar_yuv]

Now that black and white TV set can just get the Y plane, and then drop the entire U and V planes.

[image: line_pixel_planar_yuv_nouv]
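
In code, that planar layout (with the same half-horizontal chroma as above, which is essentially FFmpeg’s yuv422p) could be sketched like this, again with made-up sizes:

# Planar 4:2:2: three planes stored one after the other in memory.
width, height = 640, 480

y_size = width * height              # one Y per pixel
u_size = width * height // 2         # one U per two horizontal pixels
v_size = width * height // 2         # one V per two horizontal pixels

y_offset = 0
u_offset = y_size                    # U plane starts right after the Y plane
v_offset = y_size + u_size           # V plane starts right after the U plane

def extract_gray(buf):
    """What the black and white TV does: keep the Y plane, drop the rest."""
    return buf[y_offset : y_offset + y_size]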

There are a shitload more pixel formats around. There are higher bit depths (9, 10, 16 bits per component, both in little-endian and big-endian), YUV with interleaved UV, paletted formats (remember old arcade consoles?), YUV formats that drop a bunch more color information (both horizontally and vertically), different component orders for RGB (e.g. BGR)… Just look at this list created by ffmpeg -pix_fmts:

$ ffmpeg -pix_fmts
ffmpeg version N-69925-g9f6431c Copyright (c) 2000-2015 the FFmpeg developers
  built with Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
  configuration: --enable-libmp3lame --enable-libx264 --cc='ccache clang' --enable-gpl
  libavutil      54. 18.100 / 54. 18.100
  libavcodec     56. 22.100 / 56. 22.100
  libavformat    56. 22.100 / 56. 22.100
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 11.100 /  5. 11.100
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  1.100 /  1.  1.100
  libpostproc    53.  3.100 / 53.  3.100
Pixel formats:
I.... = Supported Input  format for conversion
.O... = Supported Output format for conversion
..H.. = Hardware accelerated format
...P. = Paletted format
....B = Bitstream format
FLAGS NAME            NB_COMPONENTS BITS_PER_PIXEL
-----
IO... yuv420p                3            12
IO... yuyv422                3            16
IO... rgb24                  3            24
IO... bgr24                  3            24
IO... yuv422p                3            16
IO... yuv444p                3            24
IO... yuv410p                3             9
IO... yuv411p                3            12
IO... gray                   1             8
IO..B monow                  1             1
IO..B monob                  1             1
I..P. pal8                   1             8
IO... yuvj420p               3            12
IO... yuvj422p               3            16
IO... yuvj444p               3            24
..H.. xvmcmc                 0             0
..H.. xvmcidct               0             0
IO... uyvy422                3            16
..... uyyvyy411              3            12
IO... bgr8                   3             8
.O..B bgr4                   3             4
IO... bgr4_byte              3             4
IO... rgb8                   3             8
.O..B rgb4                   3             4
IO... rgb4_byte              3             4
IO... nv12                   3            12
IO... nv21                   3            12
IO... argb                   4            32
IO... rgba                   4            32
IO... abgr                   4            32
IO... bgra                   4            32
IO... gray16be               1            16
IO... gray16le               1            16
IO... yuv440p                3            16
IO... yuvj440p               3            16
IO... yuva420p               4            20
..H.. vdpau_h264             0             0
..H.. vdpau_mpeg1            0             0
..H.. vdpau_mpeg2            0             0
..H.. vdpau_wmv3             0             0
..H.. vdpau_vc1              0             0
IO... rgb48be                3            48
IO... rgb48le                3            48
IO... rgb565be               3            16
IO... rgb565le               3            16
IO... rgb555be               3            15
IO... rgb555le               3            15
IO... bgr565be               3            16
IO... bgr565le               3            16
IO... bgr555be               3            15
IO... bgr555le               3            15
..H.. vaapi_moco             0             0
..H.. vaapi_idct             0             0
..H.. vaapi_vld              0             0
IO... yuv420p16le            3            24
IO... yuv420p16be            3            24
IO... yuv422p16le            3            32
IO... yuv422p16be            3            32
IO... yuv444p16le            3            48
IO... yuv444p16be            3            48
..H.. vdpau_mpeg4            0             0
..H.. dxva2_vld              0             0
IO... rgb444le               3            12
IO... rgb444be               3            12
IO... bgr444le               3            12
IO... bgr444be               3            12
I.... ya8                    2            16
IO... bgr48be                3            48
IO... bgr48le                3            48
IO... yuv420p9be             3            13
IO... yuv420p9le             3            13
IO... yuv420p10be            3            15
IO... yuv420p10le            3            15
IO... yuv422p10be            3            20
IO... yuv422p10le            3            20
IO... yuv444p9be             3            27
IO... yuv444p9le             3            27
IO... yuv444p10be            3            30
IO... yuv444p10le            3            30
IO... yuv422p9be             3            18
IO... yuv422p9le             3            18
..H.. vda_vld                0             0
IO... gbrp                   3            24
IO... gbrp9be                3            27
IO... gbrp9le                3            27
IO... gbrp10be               3            30
IO... gbrp10le               3            30
I.... gbrp16be               3            48
I.... gbrp16le               3            48
IO... yuva420p9be            4            22
IO... yuva420p9le            4            22
IO... yuva422p9be            4            27
IO... yuva422p9le            4            27
IO... yuva444p9be            4            36
IO... yuva444p9le            4            36
IO... yuva420p10be           4            25
IO... yuva420p10le           4            25
IO... yuva422p10be           4            30
IO... yuva422p10le           4            30
IO... yuva444p10be           4            40
IO... yuva444p10le           4            40
IO... yuva420p16be           4            40
IO... yuva420p16le           4            40
IO... yuva422p16be           4            48
IO... yuva422p16le           4            48
IO... yuva444p16be           4            64
IO... yuva444p16le           4            64
..H.. vdpau                  0             0
IO... xyz12le                3            36
IO... xyz12be                3            36
..... nv16                   3            16
..... nv20le                 3            20
..... nv20be                 3            20
IO... yvyu422                3            16
..H.. vda                    0             0
I.... ya16be                 2            32
I.... ya16le                 2            32
IO... rgba64be               4            64
IO... rgba64le               4            64
IO... bgra64be               4            64
IO... bgra64le               4            64
IO... 0rgb                   3            24
IO... rgb0                   3            24
IO... 0bgr                   3            24
IO... bgr0                   3            24
IO... yuva444p               4            32
IO... yuva422p               4            24
IO... yuv420p12be            3            18
IO... yuv420p12le            3            18
IO... yuv420p14be            3            21
IO... yuv420p14le            3            21
IO... yuv422p12be            3            24
IO... yuv422p12le            3            24
IO... yuv422p14be            3            28
IO... yuv422p14le            3            28
IO... yuv444p12be            3            36
IO... yuv444p12le            3            36
IO... yuv444p14be            3            42
IO... yuv444p14le            3            42
IO... gbrp12be               3            36
IO... gbrp12le               3            36
IO... gbrp14be               3            42
IO... gbrp14le               3            42
IO... gbrap                  4            32
I.... gbrap16be              4            64
I.... gbrap16le              4            64
IO... yuvj411p               3            12
I.... bayer_bggr8            3             8
I.... bayer_rggb8            3             8
I.... bayer_gbrg8            3             8
I.... bayer_grbg8            3             8
I.... bayer_bggr16le         3            16
I.... bayer_bggr16be         3            16
I.... bayer_rggb16le         3            16
I.... bayer_rggb16be         3            16
I.... bayer_gbrg16le         3            16
I.... bayer_gbrg16be         3            16
I.... bayer_grbg16le         3            16
I.... bayer_grbg16be         3            16
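
As one example from that list, nv12 is the “YUV with interleaved UV” mentioned above: a full-resolution Y plane followed by a single plane of interleaved U and V samples at half resolution in both directions. Here is a small sketch; the 640x480 size and the function name are just examples:

# nv12: a full-resolution Y plane, followed by one plane of interleaved U and V
# samples at half resolution both horizontally and vertically.
width, height = 640, 480                  # must be even for nv12

y_size  = width * height
uv_size = width * height // 2             # one U and one V per 2x2 pixel block

def nv12_pixel(buf, x, y):
    """Return (Y, U, V) for pixel (x, y) in an nv12 buffer."""
    y_val  = buf[y * width + x]
    uv_off = y_size + (y // 2) * width + (x // 2) * 2
    return y_val, buf[uv_off], buf[uv_off + 1]     # U first, then V

print((y_size + uv_size) * 8 // (width * height))  # 12 bits per pixel, as listed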

That’s it

That was a very, very basic introduction to pixel formats. If you want to learn more, go read the pixel format descriptors in the FFmpeg source code. Or else, if you’re not ready to spend a couple of years learning C and delving into the FFmpeg source code, just search on Google. There is a bunch of information out there…

Have fun…