Fog Creek Software
Discussion Board




Understanding Pixels and Dots

It dawned on me the other day that I don't really know what a pixel is. (I'm not usually interested in graphics unless they have the letters p-o-r-n-o in front of them.)

I always thought that it was the most elemental square area in a display. But I was changing my desktop setting the other day from 96 dpi to 120 dpi. I started to wonder, is the dot in dpi == 1 pixel? If so why don't they call it ppi? If not, then what the hell's a dot?

Also, I occasionally make my own cheese-whiz icons using the VS icon editor.  I know I'm not a good artist, but the professional icons always seem less blocky, with smoother lines than mine.  It's as though they're able to edit / render at a sub-pixel level. Is this possible or am I just that bad of a pixel artist?

DPI Schtick
Tuesday, December 30, 2003

A dot is the smallest pixel a printer can make.
A pixel is the smallest dot a screen can make.

When you changed your setting from 96 dpi to 120 dpi, the main thing you changed was the size your fonts (which are specified in points, not pixels) display at.

And no, it's not possible to make sub-pixel icons, and yes, you are that bad of a pixel artist.  The VS icon editor is pretty horrible.

Alyosha`
Tuesday, December 30, 2003

The professional icons that make yours look bad are probably anti-aliased.  Which, despite Joel's complaints in the text department, does make icons look pretty.  If you have some other graphics software, try creating your icons there and then resizing them down to 32x32.

Sam Livingston-Gray
Tuesday, December 30, 2003

Definitions:
Number of dots = DPI * size in inches
1 inch = 72 points.

Thus, if your monitor is set to 96 dpi, a 72-point font will take up 96 pixels.  If the monitor is set to 120 dpi, a 72-point font will take up 120 pixels.
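The arithmetic above boils down to: a point is 1/72 inch, so pixels = points / 72 * dpi.  A quick sketch (the function name is mine):

```python
def points_to_pixels(points, dpi):
    """Convert a font size in points to pixels: a point is 1/72 inch,
    so divide by 72 to get inches, then multiply by dots per inch."""
    return points / 72 * dpi

print(points_to_pixels(72, 96))   # 96.0
print(points_to_pixels(72, 120))  # 120.0
```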

For a monitor, DPI = PPI.  For a printer, DPI != PPI unless you are talking about monochrome.  For example, a 1200 dpi printer may need a 4x4 cell of dots to render a single shade for a pixel, so it's only 300 ppi (actually, the proper term is lpi) even though it's 1200 dpi.
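The printer arithmetic works out the same way; a toy sketch, assuming square halftone cells (name is mine):

```python
def effective_lpi(printer_dpi, cell_size):
    """Effective lines per inch when each halftone cell uses a
    cell_size x cell_size grid of printer dots to simulate a shade."""
    return printer_dpi / cell_size

print(effective_lpi(1200, 4))  # 300.0 -- the example above
```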

The terminology gets confusing from there.  Points aren't always nicely defined, either.  It's all part of the jargon of the graphics trade: programmers understand part of it, graphic artists understand another part, and random people here and there actually understand portions of both, but generally never end up implementing any of this (they just get annoyed at the programmers who implemented it wrong).

You can edit at the sub-pixel level *sorta*.  With antialiasing you can make some things smoother by saying that 50% of a pixel is covered by the line, and therefore drawing it in 50% gray.  And if you know your screen lays out R, G, and B subpixels in a row within each rectangular pixel, you can use the B from one pixel and the R and G from the neighboring pixel to position an edge slightly off-center with respect to the pixel grid.
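The coverage idea can be sketched in a few lines: figure out what fraction of a pixel the (black) line covers, and shade proportionally.  A toy version, not how any particular rasterizer does it:

```python
def coverage_to_gray(coverage):
    """Shade a pixel by fractional coverage: 0.0 (uncovered) maps to
    white (255), 1.0 (fully covered by a black line) maps to 0."""
    coverage = max(0.0, min(1.0, coverage))
    return round(255 * (1.0 - coverage))

print(coverage_to_gray(0.5))  # 128 -- the 50%-gray pixel described above
```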

The problem is that both of these techniques are approximations and cannot replace "real" resolution.  Icons can't have their outer edges antialiased and still look good against an arbitrary background color unless you have alpha blending (which you generally don't).
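Alpha blending, which the poster notes icons of the era generally lacked, is what lets an antialiased edge composite correctly over whatever background happens to be underneath.  A minimal per-channel sketch (names mine):

```python
def blend_over(fg, bg, alpha):
    """Composite an RGB foreground color over a background with the
    given opacity (0.0 transparent .. 1.0 opaque), per channel."""
    return tuple(round(alpha * f + (1.0 - alpha) * b)
                 for f, b in zip(fg, bg))

# The same 50%-covered black edge pixel over two different backgrounds:
print(blend_over((0, 0, 0), (255, 255, 255), 0.5))  # (128, 128, 128)
print(blend_over((0, 0, 0), (255, 0, 0), 0.5))      # (128, 0, 0)
```

Without the alpha channel, the icon's edge pixels are baked against one assumed background, which is why they look fringed against any other.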

If you want smoother icons, either draw them in Illustrator (or a similar vector graphics program) and render them to antialiased bitmaps (although you often need to hand-tweak the resulting image because of the aforementioned general lack of alpha blending), or draw them in Photoshop at an integer 2-4x multiple of the target resolution and then scale them down and, again, hand-tweak to work around aliasing.
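The draw-big-then-scale-down trick is just supersampling.  A minimal box-filter sketch for a grayscale image, averaging each n x n block (Photoshop's actual resampling filters are fancier):

```python
def box_downscale(image, n):
    """Shrink a grayscale image (list of equal-length rows of 0-255
    values) by an integer factor n, averaging each n x n block."""
    out = []
    for y in range(0, len(image), n):
        row = []
        for x in range(0, len(image[0]), n):
            block = [image[y + dy][x + dx]
                     for dy in range(n) for dx in range(n)]
            row.append(round(sum(block) / (n * n)))
        out.append(row)
    return out

# A hard diagonal edge drawn at 2x comes out with soft gray steps at 1x:
hi = [[0,   0,   0,   255],
      [0,   0,   255, 255],
      [0,   255, 255, 255],
      [255, 255, 255, 255]]
print(box_downscale(hi, 2))  # [[0, 191], [191, 255]]
```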

For my next rant on how nobody who codes GUIs really understands how fonts, pixels, and paper *really* work, I'll blather on for hours about swash letters, multiple-master fonts, the difference between Italic and Oblique, why the Bold/Italic/Underlined/etc. checkboxes in Office are fundamentally wrong (which makes multiple-master fonts, Condensed, Expanded, Expert, and other interesting type goodies not work), and how antialiasing isn't the panacea it was alleged to be in an ancient copy of WiReD magazine.

Flamebait Sr.
Tuesday, December 30, 2003

Flamebait, let me guess - your turntable rests on an oil bath and has a moon rock needle?

That is to say, understood about the myriad details of professional layout, but does it matter to the Average Joe?

Philo

Philo
Wednesday, December 31, 2003

Ahah, Philo, you took my flamebait.  Congrats, you have been trolled. ;)

I'm not sure how much of doing things the way *I* would like would matter to the average joe.  I know it would make the life of your average font designer easier, however.

The main gripe is that CSS got font-weight specifications a tad more correct by replacing the notion of bold with a weight integer, where 400 is normal and 700 is bold.  Right now, if you have the full Helvetica set, you can get a light (which would be a 200 or so) and a heavy (which is bolder than bold, so an 800 or 900).

The problem is that they also got it wrong, because the full Helvetica set of fonts gives you condensed and expanded versions of the font.  Fonts like Garamond Pro give you a semibold set, plus things like swashes and old-style figures and ligatures.

So pretty much any graphic designer ends up with a near-unmanageable font menu, for starters, and a variety of other annoyances over time.  Fonts are actually there for two main reasons: first, they give people something to play with when they aren't writing, and second, they let a designer make a gracious and readable creation.

OpenType solves some of these problems, most specifically all of the alternative character forms, except that whoever developed the software needs to have spent the time on supporting it, which may or may not have happened.  And 99% of software, including Office, hasn't spent the time on it.

The problem is that text is how you interface with the computer, so having your text be as easily readable as possible is important.  It's not something that would pop out at you unless you have too much free time ("Oh, wow, look at those swash caps!" he said, examining the newspaper with a microscope); it's more something that just ends up being easier on the eyes.  Kind of like how video shot on a cheap camcorder with ambient lighting doesn't look as good as a real production with real cameras and real lighting.  Your average person is not going to know exactly what's different, but they will notice that the real thing just looks better.  It's not in the realm of oil-bath turntables or (my personal favorite) CD players with flywheels and counterweights for less jitter.

Antialiasing is easier to argue about.  The trick to making a typeface readable on a screen is to hint it properly so that the curves are a little more square and all of the lines line up to pixel borders.  Naive antialiasing will just look blurry and crappy.  Bad ClearType-like subpixel positioning looks like an Apple II+ in high resolution mode, if anybody besides me remembers what that looks like.

Flamebait Sr.
Wednesday, December 31, 2003

There's a detailed article here about how to develop professional-strength Windows XP-style icons:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwxp/html/winxpicons.asp

Also, there are several vendors that sell very nice packages of XP-style icons, and/or can develop custom icons for you:

http://www.visualpharm.com/icons.html
http://www.glyfx.com
http://www.glyfz.com
http://www.glyphlab.com
http://www.foood.net/icons/index.htm

Robert Jacobson
Wednesday, December 31, 2003
