

Full Archives

Current Events

September 30

Conferences and upcoming presentations, including a Carson Workshops event in January.

Wow! After a relatively quiet summer, all of a sudden the web is exploding with real-life event planning.

First and foremost, I'm pleased to announce that Carson Workshops is presenting a one day workshop event in London at the end of January. I'll be spending a day sharing some of the things I've been thinking about lately. We'll be covering mostly CSS- and design-related topics, and I'll be building a layout from Photoshop mockup to dynamic, CSS-based site in real time.

If you'd like to come check it out, make sure to register right now, immediately after reading this. There's a limited Early Bird discount which I originally said would expire tomorrow; whoops, the dates were wrong, and it actually extends until November 1st. You've still got time, so don't miss out.

Elsewhere on the globe, right now at this very moment, Web Essentials '05 is happening down in Sydney. You might remember I had nothing but good things to say about last year's, which it was my privilege and pleasure to kick off with a keynote. This year's is bigger and better, and if you're not there now you're missing out on a good thing. But you can follow along remotely through Flickr and podcasts of the various presentations. Looks like another smash hit. Congrats to John and Maxine and Russ and Peter and the rest of you; I can't wait to read the follow-ups.

Longer range, everyone's must-do event of the year, SXSW 2006, is heating up already. An Event Apart is about to kick off with an inaugural stop in Philadelphia in December. And the 10th UI Conference runs in just a few weeks in Cambridge, MA.

I'll be at SXSW 2006 in March, no question. If you can only make it to one, that's the one to pick. There's always plenty of hype surrounding SXSW, but once you attend you'll understand why. It continues to grow every year, with an increasingly international audience. (Well, for that matter, I guess I'm an international attendee myself.) At the moment I haven't lined up any panels for this year, so feel free to get in touch if you're looking for a co-presenter.

There are a few others I'll be attending. Web Design World 2005 is happening again in Boston this December, where I'll be presenting a pair of talks on web typography and CSS Project Management.

And one I'm really excited about is a presentation called 'Working with Cascading Style Sheets' I'll be delivering here in Vancouver in February at the CASE District VIII Conference 2006. CASE is the international Council for Advancement and Support of Education, and the attendees will be educators from institutions around western Canada and various states in the Pacific Northwest, people who will be taking the ideas they learn back to their students and fellow staff.

These are the people responsible for teaching future web designers and developers, people who may or may not have upgraded their curriculum to reflect current web development trends. Even if just in a small way, it seems like this is a great opportunity to help get the standards message out to the next generation. I'm really looking forward to this one.


Web Server Backup

September 22

Get a full local backup of your remote web server with some basic command-line interaction and rsync. Bonus for OS X users: clickable icon backup goodness.

This is one of those handy tricks I discovered way too late, which some of you may not already know.

Problem: You have a web server located somewhere not physically close to you. You use FTP to send and receive files. You're generally okay with this setup, except for one little chink in the armor: backups. Even if you don't run remote scripts which generate files on the server (I'm looking at Movable Type here) that you never remember to back up, sooner or later your local copy and the server will fall out of sync.

Solution: How about a way to back up a perfect copy of the remote server, incrementally, so that each new update only downloads the files that have changed (and not the whole multi-gigabyte site)? It's as great as it sounds.

Caveat: Though your local computer can run any OS, this only works if the server itself is Unix-based, and you have shell access. If your site runs on IIS, it won't work. If your host doesn't provide you with a shell account, it won't work. In theory, your shell login should be the same as your FTP account, but that's not guaranteed. You may want to get in touch with your host to verify your settings.
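
A quick way to test, assuming your host allows shell access at all (and using this article's placeholder login and IP, which you'd replace with your own): try connecting directly with ssh. If you're prompted for a password and then dropped at a shell prompt, you're in business.

ssh username@67.19.16.228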

Warning: The most important things you should pay attention to are the various path settings. If you get them wrong, and somehow end up moving files to or from the wrong spot, data could become corrupted awfully quickly. The first time you run this, make sure you also have an alternate method of recovering from data loss. Just in case.

Basic Necessities

To pull this off, we need to dip into some Unix hackery, which is a bit scary for those of us used to the cushy buttons and checkboxes of a GUI. If you're on OS X or Linux, you've already got everything you need. Open up the Terminal in the former, or the command line in the latter. (If you're using Linux, presumably you already know how to get a command line and I don't have to explain this further, not that I could anyway.)

If you're on Windows, you're going to need some extra software, namely something called an "rsync client". Though it's probably overkill, grab Cygwin for now: it's a command line environment that comes with a set of powerful tools, all very much like what you get in a Unix-based OS, and rsync comes along with it. Install, then run Cygwin and you should be taken to a Unix-like command line.
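
Whichever OS you're on, a quick sanity check that rsync is actually installed and available on your path:

rsync --version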

Finding Your Local Backup Directory

So we should all be on the same page at this point, with a command prompt greeting us (shown below). If you already know how to get to your backup directory on the command line, skip ahead to the header "Running rsync".

A basic command prompt.

Now we want to find the directory that will house our backed-up site. This can be anywhere on your local system, and getting to it is going to depend largely on your computer's configuration. In my case, I have a partition on my hard drive called 'Shine', which is mounted as a separate volume. This is the equivalent of a partition showing up as the G: drive in Windows. So let's begin at the root (otherwise known as /) of our system by issuing the "change directory" command: cd /.

A command prompt after running the cd / command

We can take a look at what's in the root by issuing the "list" command: ls.

Directory listing after the ls command was issued

Where exactly to go from here depends on your OS; on a Mac, partitions are mounted under /Volumes. Under Cygwin on Windows, the User Guide should help you figure out where you need to go. So if we're working on a Mac, let's change the directory to /Volumes and take a look at what's in it using the ls command again:

Issuing the ls command lists two directories, Sparkle and Shine

On my system we see two volumes, Sparkle and Shine, which correspond with my local partitions. I'm going to skip the ensuing directory drill-down to find my ultimate destination, but by continuing to use cd and ls to navigate your file system, find the directory you'll be storing your backup in. (You can either create it ahead of time with the file manager in your OS, or use the Unix mkdir command once you're in the parent directory.) Your prompt will likely show the current path; if not, you can display it by invoking the pwd command:

The working directory is /Volumes/Shine/Personal/mb-backup
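
Strung together, the drill-down on my system looks something like this; your paths will certainly differ:

cd /Volumes/Shine/Personal
mkdir mb-backup
cd mb-backup
pwd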

Running rsync

Now we're ready. I'll cut to the chase and just show you right now what you're going to be typing (more or less), and explain it afterward:
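
rsync -aze ssh username@67.19.16.228:/home/username/public_html/ .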

Let's break it down piece-by-piece.

rsync - the program name itself; this simply causes it to run.

-aze - these are three options we're specifying. a sets archive mode, which recurses through directories and preserves things like permissions, timestamps, and symlinks. z compresses file data to speed up the transfer. e specifies which remote shell program to use for the connection; its value comes as the next argument. There are more options available, but these are the essential ones for what we're trying to accomplish.

ssh - ssh, or secure shell, is a method of securely connecting to a remote server. The previous e option told rsync that we wanted to specify one, and ssh is the program we're going to use to do it.

username@ - this is your username on the remote server. Again, this may be similar to your FTP program's login, or it may not. You'll want to contact your host if you don't know what your shell login is.

67.19.16.228 - this is the IP address of your web server. You likely won't be able to just enter yourdomain.com here, so using your IP address is the best bet. However, that's a pain if the server's IP isn't static, so alternatively this can also be the name of your host's machine. I can use aristotle.multipattern.com in place of an IP address, for example.

:/home/username/public_html/ - this is the full server path to the root of the directory you want to back up. Note the leading colon; it's important for separating the IP address from the server path. By full server path, I mean you need to know where your site sits within the filesystem of the remote server. You might be able to find this with your FTP program by continuing to navigate up in the hierarchy until you can go no further; then simply chain together the directories you navigate back down through until you reach your web site's root. Otherwise, you may need to contact your host for the full path.

. - and finally, an important trailing space followed by a single period. This indicates the current local path, which is where we navigated to earlier. Alternatively, you could skip the initial step of finding this on the command line and use an absolute path here instead of a period, e.g. /Volumes/Shine/Personal/mb-backup.
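
One extra safety net, tying back to the warning above: rsync's --dry-run flag reports what would be transferred without actually writing or deleting anything. Adding v (verbose) to the option cluster makes it list each file, so a cautious first pass might look like this, with the same placeholder login and paths as before:

rsync -azve ssh --dry-run username@67.19.16.228:/home/username/public_html/ .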

At this point, if you have the correct data entered, you should be ready to go. Hit return, and if the server is found, it will prompt you for your password. Enter it, then wait. The first sync will take quite a while.

If everything is working properly, it will appear that nothing is happening; when rsync has finished synchronizing, the command prompt will simply pop up again with no message one way or another, and you'll be able to view the results by issuing an ls command. If you don't see your entire remote server's contents now on your local hard drive, something has gone wrong. (For some reason on OS X, I get a message informing me that "stdin: is not a tty". It doesn't seem to affect the backup though, and everything else runs as expected.)
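
On that note, if the silence makes you nervous, the same v option used on a real run prints each file name as it transfers, at the cost of a much noisier terminal:

rsync -azve ssh username@67.19.16.228:/home/username/public_html/ .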

Aliasing your Backup

That's about it, if you don't mind entering the command manually every time you want to back up. But you can also create an alias or a shell script for the entire command that will make life a little easier. In either case, make sure to use the full absolute path on your local system instead of the period, so that the scripts are callable from anywhere.

Aliasing involves opening up your shell user profile. There are a bunch of different Unix shells, bash being one of the more common; each has its own profile naming scheme. In bash, this is .bash_profile, and creating an alias means adding a line like the following with your own settings (make sure it's all on one line):

alias backup='rsync -aze ssh 
	username@67.19.16.228:/home/username/public_html/ 
	/Volumes/Shine/Personal/mb-backup'

The user profile file itself is stored in your home directory, which is most likely the directory that loads when you first open up the command line; if not, you can get to it with the command cd ~. It may be difficult to open a file with a leading period in Windows (if Cygwin even uses this format); unfortunately I can't really be of much more help here, so the User Manual is once again your friend.
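
One more detail: an edited profile doesn't take effect in your current session. Either open a new terminal window, or (in bash, at least) reload it by hand:

source ~/.bash_profile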

Assuming you've managed to create the alias, you can now invoke the backup simply by typing backup on the command line.

OS X Shell Script

We can take it one step further in OS X though, and create a clickable icon for the backup. This involves opening a text editor and creating a new text file, which we'll save as a shell script. Enter the following as the contents of the file, replacing with your own settings where appropriate:

#!/bin/bash
rsync -aze ssh 
	username@67.19.16.228:/home/username/public_html/ 
	/Volumes/Shine/Personal/mb-backup

The latter three lines are identical to the command line we generated earlier, and should all be on one line. Save this file wherever you want it, but make sure to give it a ".command" extension. Also very important: make sure that the line break format is Unix, not Macintosh or DOS.

Once you have this file saved, you'll need to make sure you have executable permissions on the file. Open up the Terminal again and find the directory you've saved it in, then issue this command:

chmod 744 filename.command
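
For the curious, the 7 grants you, the owner, read, write, and execute permission, while each 4 grants the group and everyone else read-only access. If the numbers feel too cryptic, and all you need is the executable bit, this works just as well:

chmod u+x filename.command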

This very last step may be necessary, depending on your system configuration. In the Finder, right-click (Ctrl-click if you have to) on the file and select "Get Info". In the "Open with" menu, select Terminal from the list. Close the dialogue, and you're done.

Now whenever you wish to back up your server, all you need to do is double-click the icon and enter your password. If it's not working as expected, check out this tutorial on executable scripts for more help.

Finally, if this simple set of Unix commands is brand new to you, you may also wish to look into the ability of ssh to lock down your mail, especially if you use a wireless internet connection of any kind.
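
As a taste of what that involves: ssh can forward a local port through its encrypted tunnel to a port on the server. A minimal sketch, assuming (purely for illustration) that your mail lives on the same server and speaks POP3 on the standard port 110:

ssh -f -N -L 1100:localhost:110 username@67.19.16.228

Point your mail client at localhost, port 1100, and your mail travels through the tunnel instead of in the clear.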

There's gold in the Unix command line. It's worth learning.


CSS Optimization

September 16

Some thoughts on how to go about writing a CSS optimizer.

I received an enthusiastic message off the contact form this morning from someone writing an online CSS optimization tool. So I had a look.

The idea of compressing CSS/HTML isn't a new one; around 2000, I spent a bit of time writing a "cruncher" for my bloated markup that would strip whitespace and extraneous characters, and I know I was even following in someone else's footsteps then.

But the classic problem with markup optimizers is that they're lossy, which results in two problems.

First, you're saving only the fraction of the file size that was otherwise lost to unneeded characters. It's not true compression; it's only a shrinking of what gets sent over the wire. The savings aren't substantial, especially in light of true server-side compression like gzip. And CSS gets cached anyway; the initial download is a one-time hit. It's hard to argue that the 2k or so saved on the first visit to a site, and not on subsequent visits, is worth it considering this next problem.
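
If you're curious what gzip alone buys you, a quick before-and-after byte count on the command line tells the story (assuming a local file named style.css):

wc -c < style.css
gzip -c style.css | wc -c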

Second, it's a one-way process; once you've stripped those characters, you're not getting them back. CSS isn't like GIF, in the sense that there's a separate original file (PSD, PNG, AI, whatever) that you can easily edit if you need to change a GIF. The CSS file is the original; try editing one that has been through the compression process, and you won't like yourself very much for having done so. Post-production readability is not something to discard lightly.

With the cruncher I built (the Visual Basic source for which is long lost, so don't bother asking), the second problem was an important one to solve; otherwise I never would have bothered. Assuming the source markup followed a certain set of whitespace formatting rules (tabs for indentation, multiple tabs for multiple levels of indentation, etc.), you could programmatically re-create it after the fact. And despite some exceptions, I managed to get this working rather well. One button was for compressing, another for de-compressing; a file going through that round trip wouldn't look exactly like the original, but it did look like something a human could edit.

What I find particularly interesting about this new compressor is that, being CSS-specific, it takes advantage of the cascade to combine like rules:

#header h2 {
	font-size: 2em;
}
#sidebar h3 {
	font-size: 2em;
}

The above code block would compress to something like so:

#header h2,#sidebar h3 {font-size:2em;}

64 bytes of code in the first example (including carriage returns) versus 39 bytes in the second. The possible optimization really depends on selector lengths staying reasonable, though; I could imagine long, messy selectors, duplicated across different combined rules, actually causing the file size to grow. Still, in general, you might be able to shave off a quarter of your file size or more.

So how would one go about writing a good CSS and HTML optimizer that I might consider using? I'm not sure you could. As evidenced by the fact I lost interest in the one I was writing, even if you manage to gracefully reverse the process, the extra time it takes to de-compress/re-compress every time I edit the file invalidates the relatively small benefits of using one. And everyone viewing my source would think my return key broke.

If, however, you were to write a just-in-time server-side post-processor (and throw in a few more hyphenated words while you're at it) that doesn't break CSS caching, then we might have something. Consider what Shaun Inman did with CSS SSC: having a script parse my CSS file just as it's about to get sent down the wire, so that the original CSS file I wrote doesn't get compressed (and thus I can continue to edit it without having to do anything extra), seems like the only way this could work.
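
As a crude illustration of the principle, and emphatically not Inman's implementation: even a dumb build step that writes a stripped copy while leaving the original untouched gets you partway there. A one-line sketch, assuming a file named style.css, that collapses all runs of whitespace (and will happily mangle any significant whitespace inside quoted strings):

tr -s ' \t\n' ' ' < style.css > style.min.css

The hard part, and the interesting one, is doing that transformation just as the file heads down the wire, without breaking CSS caching.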

Questions still linger about the ultimate effectiveness, given gzipping, and whether the parsing overhead is worth it. But a proof-of-concept that puts those concerns to rest could be an interesting experiment. Any takers?


Zen Archives++

September 7

I'm happy to report the Zen Garden archives have (finally) undergone a major re-tooling.

The css Zen Garden has been desperate for some TLC for a while now. Submissions have been increasing steadily (sometimes 6 or 7 per day), but management and archiving have been a frustrating problem for quite some time. So I fixed some of the major problems.

Categorization

Since its launch on this domain over two years ago, the css Zen Garden archives have undergone numerous changes to cope with the growing volume of designs.

It seems hard to believe now, but at one point in time all designs I had received fit on one page. When that page grew too weighty, I chopped it into a handful of categories and gave each of those their own page. Then they had to be further subdivided into their own series of pages.

However, the categories themselves have become mostly meaningless, with 'Conceptual' serving as a default placeholder for just about any non-official design, unless I found a way to cram it into one of the other categories instead.

But fixing the problem either meant way more work for me, or adding a large degree of automation to the publishing process. So I opted for automation.

Automation

After I spent the better part of last year writing a book on the site, it became obvious that my manual addition process was going to need to be addressed, and soon. So in January or so I started tinkering with MySQL databases to see what I could come up with.

By February, all new submissions were going into a database. Not a very good database, mind you, but it was enough to allow me to write a quick manager for the site during the spring, which let me categorize submissions and automatically generate HTML that I could copy and paste into various templates. Handy, but not good enough.

About a month back, I needed help screen-scraping the list of Zen Garden designs. This was to facilitate Version 2 of the manager, which would contain a full database of all Zen Garden designs, so that publishing would be automatic from now on. And barring some publishing settings which will take a bit longer to enable, it's finally done. The publishing workflow is almost completely automated, so all I have to do upon categorizing a design is hit the big 'Publish' button. (Official designs take a bit more manual work, but that's a necessary evil.)

Archive Improvements

So now that I can update the site without planning my schedule in advance, what can we expect?

Well, immediately you'll see that the archives finally have screenshots. It's been a long time coming, and the lack of them has bugged me too. Now that I spend less time updating, I can spend a bit more time grabbing these for the official designs. It's a manual process involving Safari and Photoshop; until it can be automated on the server as well (and I have my doubts this is possible, barring an API from BrowserCam or similar), screenshots will only exist for official designs. We'll see what this does to my bandwidth though...

As well, I can do fun things like creating 'recently added designs' lists, and even offer an alternative archive format in the 'View Designs By Date' page. This is a bit broken for existing designs due to missing publish dates (creating metadata manually is no fun at all, thanks), but it ought to become a lot more useful as new designs are added.

The categorization problem above? So fixed. Now instead of a handful of meaningless categories, designs can exist in any number of the 50+ categories that describe things like layout type, dominant colours, and themes. I'm basically treating them as tags, and can add new ones at any time. For now, I've only re-categorized the 180 official designs, but as new ones come in I'll be more accurately categorizing any and all that get published. The new problem is going to be categorizing the categories; I've reserved a 'group' column in the table for precisely that, which will be implemented in due time.

What else? Oh, a couple of new RSS feeds, which are going to be 'experimental' for now. The designs listed are screwy at the moment (expect them to be completely inaccurate) and I'm not 100% convinced they do what they're supposed to yet; that will take a few updates to figure out. But you can expect they'll eventually work.

Long Term

So that solves a few problems, and leaves a couple of questions hanging about what has yet to be done:

How about Comments/Ratings?
I've considered comments, but I'm not going to do it. Browse any major CSS design listing site to experience first-hand why not. Anonymous, commentary-free ratings might be a possibility, but they're a very low priority.
Ads?
No. Well, kind of. Since I've spent far more (billable) time than I should have on this upgrade, I'm likely going to run some ads on the archive pages for a few months. Only to recoup the costs, and then they'll be removed. I have a specific dollar value in mind, so they'll only run until that has been reached. If you have a cool product that's relevant to the type of people likely to be viewing the Zen Garden, get in touch. Otherwise it'll just be AdSense.
Why are the archives still on mezzoblue.com?

Legacy and overhead. The site was originally launched on this domain, and since the purpose of the site is to re-skin a common body of content after all, picking just one of those designs to dress the archives and supporting material in seemed a little silly. But forcing submitters to style more page variations would have been more time-consuming for them, and the barrier to entry would have been a lot higher. This was a real concern at the time, so I dropped the supporting material into this site's template to side-step the problem.

At some point I think there will be a 'master' design for the supporting material, which will then see it and the archives pushed over to the csszengarden.com domain. At that point, the page that greets you when you visit the site will change to more of an explanation/design listing, to solve some of the lingering problems around the fact that I can't change the markup structure these days. But I'm not quite ready for that step yet.

Is it still relevant?

Truth be told, my idea of what the site is has changed quite a bit recently, perhaps yours has too. Keeping in mind that it's a lot of things to a lot of people, and designers new to CSS are still stumbling across it for the first time every day, its original purpose hasn't gone away. But for those of us who get CSS design now, and that's a significant lot of us, the initial impact has long since faded.

What was reinforced to me as I went through the archives for screenshots and categorization is what a wide and diverse body of work we've created: hundreds of designers solving the exact same problem in so many unique ways. That work continues to be relevant, and although much has changed in the past few years about the techniques and code we use, a good design is still a good design. And there are a lot of them. And there will continue to be more of them.

As the long-obsolete wording of the site itself says, "One day this gallery will be a historical curiosity; that day is not today." While that day may have arrived for the markup that makes up the site (more divs and classes than necessary, by today's standards), for the site navigation (woefully inadequate), and even for the wording itself (oh how I wish I could edit), the main point of the site, the designs people submit, will continue to be relevant as long as I continue to receive them.
