If you ever work with old hardware, like writing an NES emulator or reverse engineering an old Amiga game, you will inevitably come across planar graphics modes.
These days, with loads of memory and huge color depths, graphics are stored sequentially. It makes the most logical sense. The red, green, and blue values for each pixel are just stored in an array, in some predefined order (RGB, BGR, whatever). There may even be an alpha channel.
Video cards of yesteryear didn't have access to that much RAM, so direct color, requiring 3 or 4 bytes of RAM per pixel, was out of the question.
Those old video cards used a palette-based system instead. There's a palette with the red, green, and blue values of each color. Then the image data itself is just an array of indices into the palette. With 256 colors, we can use a single byte per pixel.
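The idea can be sketched in a few lines of C. The palette contents and image here are made up for illustration; a real system would have its palette in dedicated hardware registers:

```c
#include <stdint.h>

/* Hypothetical 4-entry palette, 8 bits per channel: {R, G, B}. */
static const uint8_t palette[4][3] = {
    {0x00, 0x00, 0x00},  /* 0: black */
    {0xFF, 0x00, 0x00},  /* 1: red   */
    {0x00, 0xFF, 0x00},  /* 2: green */
    {0xFF, 0xFF, 0xFF},  /* 3: white */
};

/* Expand an indexed image into a sequential RGB buffer: each pixel
 * is just an index into the palette. */
void expand(const uint8_t *indices, int n, uint8_t *rgb_out) {
    for (int i = 0; i < n; i++) {
        rgb_out[i * 3 + 0] = palette[indices[i]][0];  /* R */
        rgb_out[i * 3 + 1] = palette[indices[i]][1];  /* G */
        rgb_out[i * 3 + 2] = palette[indices[i]][2];  /* B */
    }
}
```

A nice side effect of this indirection is that changing one palette entry recolors every pixel that references it, which is how a lot of old-school palette-cycling effects worked.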
This is all very straightforward so far. However, even 256 colors was extravagant for some older hardware.
With 16 colors, we only need 4 bits per pixel for the indices. That means that a single byte can modify 2 pixels on the screen. This works out rather well, since the width of each video mode is always going to be an even number. With 16 colors, images take up exactly half the space they would with 256 colors. The only thing you'd really need to worry about is whether the low nibble (4 bits) or the high nibble is drawn first. That, of course, is up to the hardware. Regardless, the sequential palette-based system still works.
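Addressing a 4bpp pixel is still cheap. A minimal sketch, assuming the low nibble holds the even pixel (real hardware may use the opposite order):

```c
#include <stdint.h>

/* Extract the palette index of pixel `x` from a 4bpp packed row.
 * Assumption: the low nibble holds the even-numbered pixel. */
uint8_t get_pixel_4bpp(const uint8_t *row, int x) {
    uint8_t byte = row[x / 2];          /* two pixels per byte */
    return (x & 1) ? (byte >> 4)        /* odd pixel: high nibble */
                   : (byte & 0x0F);     /* even pixel: low nibble */
}
```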
With 32 colors, things get difficult. We technically only need 5 bits per pixel for the indices, but you cannot really cut up an array of bytes into 5-bit chunks easily. You could pad it, so that each byte contains the index, but now you're wasting 3 bits per pixel. That's a lot of space to be wasting.
You could try to cram it all together: the first 5 bits of the first byte hold the first pixel, the top 3 bits of that byte hold the bottom 3 bits of the second pixel, the bottom 2 bits of the second byte hold the top 2 bits of the second pixel, and so on. This is a disaster. Addressing individual pixels is now a total nightmare.
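To make the pain concrete, here is a sketch of reading one pixel out of such a tightly packed 5bpp buffer, assuming pixel 0 sits in the low bits of byte 0. A pixel can straddle a byte boundary, so we have to stitch bits from two adjacent bytes together:

```c
#include <stdint.h>

/* Read the 5-bit index of pixel `x` from a tightly packed 5bpp
 * buffer. The caller must leave one spare byte at the end of `buf`
 * so the 16-bit read never runs out of bounds. */
uint8_t get_pixel_5bpp(const uint8_t *buf, int x) {
    int bit   = x * 5;    /* absolute bit offset of this pixel */
    int byte  = bit / 8;  /* first byte involved               */
    int shift = bit % 8;  /* offset within that byte           */
    /* Read 16 bits so the straddling case is covered. */
    uint16_t word = buf[byte] | ((uint16_t)buf[byte + 1] << 8);
    return (word >> shift) & 0x1F;
}
```

Reads are merely ugly; writes are worse, since storing one pixel can mean read-modify-write cycles on two separate bytes.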
So our options are waste space, or make addressing suck.
This is where planar graphics come in to save the day. Instead of trying to pack all the bits of the indices together, we spread them out across planes.
A plane is a 1-bit-per-pixel array, so any byte of a plane controls 8 pixels. For 32 colors, we need 5 bits per pixel, so we need 5 planes.
The first plane contains the lowest bits of all pixels, the second plane contains the second-lowest bits, and so on. So to modify pixel 3 on the screen (counting from 0), you would modify bit 3 of the first byte of every plane.
It's a bit more complicated than sequential storage, but it's still easy to address a specific pixel (byte = position / 8, bit = position & 7), and it saves a ton of RAM. Imagine a 320x200 screen with 32 colors. If we had padded each pixel out to a byte, it would require 64,000 bytes. With planes, we only need 320 * 200 / 8 = 8,000 bytes per plane, which with 5 planes comes to just 40,000 bytes.
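The whole scheme fits in a couple of small functions. This is a sketch, not any particular machine's layout; it puts pixel 0 in bit 0 of each byte, while real hardware like the Amiga typically draws the most significant bit first:

```c
#include <stdint.h>

#define PLANES      5
#define WIDTH       320
#define HEIGHT      200
#define PLANE_BYTES (WIDTH * HEIGHT / 8)  /* 8,000 bytes per plane */

/* Planar framebuffer: 5 planes x 8,000 bytes = 40,000 bytes. */
static uint8_t planes[PLANES][PLANE_BYTES];

/* Set pixel `pos` (0..63999) to the 5-bit index `color` by
 * distributing one bit of the index to each plane. */
void set_pixel(int pos, uint8_t color) {
    int byte = pos / 8;
    int bit  = pos & 7;
    for (int p = 0; p < PLANES; p++) {
        if ((color >> p) & 1)
            planes[p][byte] |= (uint8_t)(1 << bit);
        else
            planes[p][byte] &= (uint8_t)~(1 << bit);
    }
}

/* Reassemble the index by gathering one bit from each plane. */
uint8_t get_pixel(int pos) {
    int byte = pos / 8;
    int bit  = pos & 7;
    uint8_t color = 0;
    for (int p = 0; p < PLANES; p++)
        color |= (uint8_t)(((planes[p][byte] >> bit) & 1) << p);
    return color;
}
```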
While planes were super useful for odd bit-length palette sizes like 32 colors, they were also used for expansion reasons. Let's say we made a system with 16 colors, but intended to upgrade the system later on to 32 colors.
If we made the 16-color system use planar graphics instead of storing the indices in the low and high nibble of each byte, then going to 32 colors only requires adding a new plane somewhere in RAM. The older 16-color games would still work, they just wouldn't ever reference the 16 newer colors in the palette (assuming the new plane was initialized to 0 by the system).
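A sketch of why the old software keeps working: gathering an index from 5 planes gives exactly the same result as gathering from 4 whenever the new fifth plane holds 0 (the function name here is made up for illustration):

```c
#include <stdint.h>

/* Gather a pixel's palette index from `n` one-bit planes;
 * `bits[p]` is the bit this pixel has in plane p. */
uint8_t index_from_planes(const uint8_t *bits, int n) {
    uint8_t color = 0;
    for (int p = 0; p < n; p++)
        color |= (uint8_t)((bits[p] & 1) << p);
    return color;
}
```

Since the new plane contributes only the top bit of the index, zeroing it confines every pixel to indices 0-15, i.e. the original palette.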
With systems like the Amiga, software could even specify how many planes it wanted to use, and the hardware addressing didn't even have to change to accommodate it.