

Full Archives

Work and Play

December 31
You may notice the now–default white typeface. This site was originally designed on an LCD screen with a rather light gamma, and even I had trouble reading the dark text on a CRT. I reasoned that the type switcher solved this problem, but for some reason the cookies that are supposed to make a chosen style stick just don’t work for me. So, taking the path of least resistance, I’m making the switch to improve contrast and legibility. After all, what good is the text if you can’t read it?

Also of note — I’ve begun a list of books I read in 2002, along with my thoughts on them. Starting at the end, I’ve made it back to September so far. Take a look if this sort of thing floats your boat. I’ve made an effort to disclose as little as possible about each story and nothing about the endings, since I’m the type of person who strongly dislikes knowing too much of a story before experiencing it. That said, there shouldn’t be anything more in my reviews than you’d find in the dust-jacket summaries.

A new year starts tomorrow. Make it count.


InDesign and Type

December 30
I have been playing with Adobe’s InDesign lately, and I’m really enjoying this program. QuarkXPress has always been the software of choice for print design, so Adobe’s entry into the market remains widely under-used. I never got into Quark, mainly due to not having it, so I’ve been free to experiment without the shackles of familiarity confining me, and I like what I see.

InDesign reportedly supports OpenType, the benefits of which I’m only beginning to understand. TrueType and Type 1 fonts have traditionally been limited to one character set. This set could, of course, be as large as was necessary, and multiple character sets were generally incorporated into individual fonts regardless, but accessing anything beyond your basic keyboard characters is always such a pain. I’ve written about this previously. OpenType allows for multiple variations of each individual character, various ligatures, and probably a whole lot more. Developed jointly by Adobe and Microsoft, it’s supposedly cross-platform and multi-national. It looks to be everything that TrueType is not. (More information from the proverbial horse’s mouth.)

InDesign has a built-in Glyph palette. Still not the ideal solution, but a step in the right direction, it functions similarly to Windows’ Character Map and allows you to choose a character for insertion into your body copy. You can select alternates for each character, if they’ve been defined in the font. As well, paragraph styles may be defined that allow ligatures to be inserted automatically. I’ve yet to truly explore the ways in which InDesign makes life easier for those working with large character sets, but what I’ve seen so far has been enough to impress me.
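For the curious, you can poke at this outside of InDesign too. Here’s a minimal sketch using the fontTools Python library (my addition; the post doesn’t mention it) that lists which substitution features a given OpenType font declares in its GSUB table. The font path is a placeholder, not a canonical location.

```python
# List the substitution features (ligatures, alternates, small caps...)
# declared in a font's GSUB table. Requires: pip install fonttools
# "SomeFont.otf" is a placeholder; point it at any OpenType font.
from fontTools.ttLib import TTFont

font = TTFont("SomeFont.otf")

if "GSUB" in font and font["GSUB"].table.FeatureList is not None:
    records = font["GSUB"].table.FeatureList.FeatureRecord
    # Feature tags are four-letter codes: 'liga' for standard ligatures,
    # 'salt' for stylistic alternates, 'smcp' for small caps, and so on.
    print("Substitution features:",
          ", ".join(sorted({r.FeatureTag for r in records})))
else:
    print("No substitution features defined.")
```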


The Case for Copyright

December 16

As reported today by Wired News, Creative Commons has released the first version of their licensing project. The astute reader may have gathered, from the Eldred v. Ashcroft support in the right-hand column of this very web site, that I don’t support excessive copyright. The following should confirm that assumption.

Creative Commons is offering a set of canned licenses for creative work, with an emphasis on allowing shared work while retaining copyright. Thus, a writer who has spent time putting together an article on Fair Trade coffee can publish it to the web and allow individuals to save a copy, while still licensing it to a magazine or other publication for profit.

There’s no paradox here; it’s all a perfectly legitimate application of international copyright law. In fact, I’d go so far as to say it’s more legitimate than large businesses funding copyright extensions ad infinitum to hold on to their so-called intellectual property. (For the example alluded to, see the white “(e)” link to the right.)

The great thing about Creative Commons’ effort is that the work has already been done — they have written many individual licenses and give you the option to mix and match. For example, you may choose to hold on to commercial rights while allowing individuals to create derivative work. Or you might decide to release your work to the public domain. The supporting licenses are painstakingly detailed in the current fashion of legalese, but explained in plain English in terms that can’t be mistaken.

While the world becomes increasingly hostile to even parody and fair use, it’s refreshing to know that common-sense alternatives exist to the all-or-nothing approach of big business. I support the cause, and hereby license this site and all works contained therein under the Attribution-NonCommercial-ShareAlike 1.0 license to show it.


a Pocket full of Rye

December 13
In a complete surprise move, even to myself, I threw together a secondary publication this week that I’m soft-launching. a Pocket full of Rye is a new weblog that I hope to publish monthly. I would like to say the schedule will be more frequent than that, but I know myself: I’m being realistic in saying it’s monthly. Inspired by greats like Fireland and Textism, it will be a collection of short stories, essays, and writing on more or less whatever I feel like at the time. I aim to collect the writing and self–publish at some point far off in the future, but that’s long, long, long–term planning, so don’t get your hopes up. There will be more work on it over the holidays, and I’ll find some way of linking the latest version here. For now, go take a look at the first edition. Feedback is welcome.


Effects of CSS-Based Design

December 12
When one decides to scrap the traditional hacks and kludges in creating an HTML page, and opts instead to format with CSS, the effects can be rather dramatic. A certain Mr. Z. linked an interesting tool last night, the GetContentSize analysis utility. Stripping all extraneous CSS and JavaScript, it reports the signal-to-noise ratio of your HTML. mezzoblue, for example, sits at about 45% (a number which will inevitably change as new articles get posted and old ones removed), while Zeldman is a higher 60%. Both are CSS–only layouts that don’t use table hacks.

Let’s look at a few examples of old–school HTML tabling to compare. As tipped off in September, Microsoft redesigned using a plethora of FONT tags and insanely poor code, which hasn’t been changed in the least as of yet. The result is a lowly 8.63% content level, which should surprise no one. Seemingly for the benefit of a small handful of 3.x browser users (who view the web as a great big broken mess anyway), presentation code and content are carelessly intertwined, when graceful alternatives exist that were, in fact, developed in part by Microsoft themselves. Not many of the other big players fare better. Amazon? 11.32%. Yahoo? 15.93%. AOL? 8.0%. IBM? 7.24%.

The obvious question: so what? Why do these numbers matter? On their own, of course, they mean nothing. But they’re telling of an underlying difference in the way these sites are coded. CSS–based design is intended to work on any browser that supports the relevant W3C recommendations. When done right, an external file (or two, or more) controls the appearance of the entire site. This means each new page loads quicker, since the display code is cached. A redesign can potentially be done without touching any of the content files, which makes for faster turn–around time. The only real disadvantage is that anything 4.x and back chokes on most CSS beyond simple text formatting, which means roughly 6 or 7 percent of the market.

And that’s the key, that small niche. These sites willingly increase load times, add to development cost, and use five or six year old standards to deliver today’s web to people who can’t or won’t upgrade to newer technology. That’s why we’re seeing ten percent content in their HTML: someone in marketing has decided that they can’t risk losing that share.

This is an issue that won’t die soon. Anyone in the development business who understands it will advocate the use of CSS. In fact, most personal sites and weblogs are going this way, and those using the older browsers have been seeing upgrade messages for a while now when they happen to hit these sites. This will only continue as more and more business and content-based sites come around. Wired was the first major player to move in this direction with its recent redesign, and Wired is in an excellent position to do so, since their focus is on the cutting edge. It’s a step in the right direction, and now other developers have a mainstream site to reference when making proposals. The revolution has been slow to kick in, but it has started, it’s happening, and it will continue.
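As an aside, the idea behind a tool like GetContentSize is simple enough to sketch, even without knowing its exact algorithm: strip the tags, scripts, and styles, then compare the remaining text against the total page size. Here’s a rough, modern Python approximation of my own (not the actual utility):

```python
# Rough approximation of a GetContentSize-style check: what fraction
# of a page's bytes is readable text once tags, scripts, and styles
# are stripped? My own sketch of the idea, not the actual tool.
from html.parser import HTMLParser

class ContentCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text_bytes = 0
        self.skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip:
            self.text_bytes += len(data.strip().encode("utf-8"))

def content_ratio(html: str) -> float:
    counter = ContentCounter()
    counter.feed(html)
    total = len(html.encode("utf-8"))
    return counter.text_bytes / total if total else 0.0

sample = ("<html><head><style>p{color:red}</style></head>"
          "<body><p>Hello, <b>world</b>!</p></body></html>")
print(f"{content_ratio(sample):.2%}")  # markup-heavy sample scores low
```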


Origins of Verdana/Tahoma

December 5
I’ve noticed the similarities between the Tahoma and Verdana typefaces all along. Despite knowing it’s usually bad practice, I have never felt remorse in using both on a single piece of work. Verdana is the wide font, Tahoma is the tight font. That’s how I distinguished them in my mind. I’ve found through trial and error that it’s never a good idea to bump Tahoma’s tracking to anything lower than the default; Verdana has a bit of room to maneuver, since it has always just felt wider.

I’ve long known the history of Verdana — commissioned by Microsoft and designed by Matthew Carter (formerly of Bitstream) specifically for on–screen use, it is perhaps the most web–friendly font there is, due to its strong design and high market penetration. Tahoma was a different matter. I knew it was commonly distributed with Microsoft Office, so I assumed it was another Vincent Connare creation. Probably due to lack of trying, I never found a good answer to my curiosity.

Fast forward to today. I got curious enough to load up Photoshop and compare Verdana to Tahoma, side by side. The results? They’re almost exactly the same font, distinguished just about solely by their spacing and hinting! The results didn’t surprise me in the least; what did was that this had gone completely unnoticed by me for so long. In the image below, the two are presented with exactly the same options set: 30px text, a tracking value of 10, and anti–aliasing set to strong. Notice the individual character shapes, compare the stems and bowls, and it becomes pretty obvious that letter-spacing is the main differentiating feature between the two. Verdana is, in fact, slightly wider than Tahoma as well. I initially assumed this was solely due to the hinting, but upon closer inspection the individual letter–forms are a bit wider.
Tahoma/Verdana examples
A quick Google search confirmed what my eyes were telling me — Verdana was created by modifying Tahoma. This may not be news to any of you, but it sure is to me.
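If you’d rather measure than eyeball it, the per-glyph advance widths can be read straight out of each font’s hmtx table. A quick sketch of my own using the fontTools Python library (the file paths are assumptions, not canonical locations):

```python
# Compare a few advance widths in Verdana and Tahoma. Requires
# fonttools plus local copies of both fonts; paths are assumptions.
from fontTools.ttLib import TTFont

verdana = TTFont("Verdana.ttf")
tahoma = TTFont("Tahoma.ttf")

# hmtx maps glyph name -> (advance width, left side bearing), measured
# in font units; divide by head.unitsPerEm to normalize if they differ.
for glyph in ("a", "e", "m", "o", "space"):
    v = verdana["hmtx"][glyph][0]
    t = tahoma["hmtx"][glyph][0]
    print(f"{glyph:>5}: Verdana {v}  Tahoma {t}  (diff {v - t})")
```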


The power of “Transform Each”

December 3
It seems to me that Adobe Illustrator is one of those programs that everyone knows a little bit about, but that few consider themselves experts in. Often eclipsed by its big brother Photoshop, it packs in a heck of a lot of great features you don’t find elsewhere. I’ve recently been doing a lot of work in Illustrator, and I’m reaching a familiarity level that comes close to my main squeeze, Photoshop. One of the little gems I’ve been making a lot of use of is the “Transform Each” function, found under the Object menu (Object > Transform > Transform Each, to be precise).

what it does

The idea behind this function is that if you’ve got a bunch of objects you want to modify in subtle ways (scale, move, rotate, or any mix of the above), you can select them all, open up the panel, play around with the settings, and modify all of them at once. While it sounds somewhat simplistic, the real power behind this feature shows when you check the “Random” checkbox in the control panel. Selecting Random allows arbitrary transformations on each object, within the ranges you set in the panel. That means each individual object in your selection is modified differently, which can be a tremendous time–saver when you need to fill an area with a somewhat random pattern.

figure 1.
In figure 1 above, I started with a grid of squares to use as my source objects. This set was created by placing a single square on the canvas, duplicating it, selecting both squares, duplicating them, and so forth; using the power of exponents like this, you can build a large set of objects quickly. The colour variations were a simple matter of using the colour blend filters under Filter>Colors. Changing each corner square of the grid to a different colour, I selected the top–most horizontal row and applied the ‘Blend Horizontally’ filter. After blending each of the four edges in this manner to create reference colours, I selected each row individually and blended horizontally.
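Incidentally, this edges-then-rows blending works out to a bilinear interpolation of the four corner colours. A short Python sketch of my own (purely illustrative, not what Illustrator does internally) computes the same grid of fills:

```python
# Bilinear interpolation of four corner colours across an n-by-n grid,
# which is effectively what blending the edges and then each row does.
def lerp(c1, c2, t):
    """Blend two RGB triples linearly; t runs from 0.0 to 1.0."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def corner_blend(tl, tr, bl, br, n):
    """Return an n-by-n grid of RGB fills from four corner colours."""
    grid = []
    for row in range(n):
        v = row / (n - 1)
        left = lerp(tl, bl, v)   # reference colour down the left edge
        right = lerp(tr, br, v)  # reference colour down the right edge
        grid.append([lerp(left, right, col / (n - 1)) for col in range(n)])
    return grid

# Example: red, yellow, blue, and white corners on an 8x8 grid.
for row in corner_blend((255, 0, 0), (255, 255, 0),
                        (0, 0, 255), (255, 255, 255), 8):
    print(row)
```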
figure 2.
Once I had a collection of objects to work with, I selected all of them and loaded the Transform Each dialogue. This is where the fun begins; the result is shown in figure 2 above. There are a couple of sliders for controlling the scale and offset, as well as an angle indicator. If Random isn’t checked, these settings are applied to every object equally, e.g. changing the width to 150% scales every selected object by that amount. When Random is checked, these values instead represent ranges for the transformations: if width is set to 150%, any object may be scaled to anywhere between its original width and 150% of it. All objects are affected differently, but on the same scale. The offset and angle ranges work much the same way.
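To make the range semantics concrete, here’s a little Python sketch of my own that mimics the Random behaviour as described: each object independently draws a scale, offset, and rotation from within the configured ranges. (The object model and range values are assumptions for illustration, not Illustrator’s internals.)

```python
# Mimic "Transform Each" with Random checked: every selected object
# draws its own scale, offset, and rotation from within the ranges.
import random
from dataclasses import dataclass

@dataclass
class Shape:
    x: float
    y: float
    width: float
    height: float
    angle: float = 0.0

def transform_each_random(shapes, scale=1.5, move=20.0, rotate=45.0):
    """Apply a different random transform to each shape.

    scale=1.5 means anywhere from 100% to 150% of the original size,
    matching the behaviour described above; move and rotate are +/-
    ranges in points and degrees.
    """
    for s in shapes:
        factor = random.uniform(1.0, scale)
        s.width *= factor
        s.height *= factor
        s.x += random.uniform(-move, move)
        s.y += random.uniform(-move, move)
        s.angle += random.uniform(-rotate, rotate)

# A 4x4 grid of 20pt squares, spaced 30pt apart.
grid = [Shape(col * 30.0, row * 30.0, 20.0, 20.0)
        for row in range(4) for col in range(4)]
transform_each_random(grid)
print(grid[0], grid[1], sep="\n")
```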
figure 3.
Interesting things start happening when you use different shapes and repeat the process. There’s also no reason to limit this function to an array of identical shapes — any shape works, and you can of course mix and match freely. The only glitch I’ve found is that grouped objects aren’t treated as groups; each object within a group is acted on individually. This makes the function much less effective on anything but simple shapes, unfortunately. In figure 3 above, you’ll notice that each star is given space between it and the next. This is something that had to be done manually after applying the transformation: the function doesn’t distinguish between objects, and overlaps are fairly common. Sometimes that’s desired, sometimes it’s not. Adjustments are inevitable when the space is needed, but luckily they’re not too tedious. Well, at least not as tedious as creating each shape individually and transforming it by hand, at any rate… Transform Each is a powerful tool, and one of those simple little tricks that can prove effective time and again.
