It could work: a 3rd party email client for OS X

Brent Simmons started a discussion yesterday about email apps for OS X. To summarize: Apple Mail doesn't do enough for everyone, and the alternatives aren't so great either. But because Mail is free, there's little incentive for a third party to do better.

Paul Kafasis agrees, saying "Don't compete with Free and Don't compete with Apple". He draws a comparison between email apps and browsers, where there is very little money to be made because there are so many excellent free browsers out there. He also compares the situation to music players - it seems like everyone's got a pet feature they would add to iTunes, so there ought to be an audience for a couple of music players, except that competing with iTunes is just not a good business plan.

I do think there is a market for a pro email client for OS X, and I'll use another core app category to explain: text editors. I think they are a better analogy than music or browsers. TextEdit, which ships on every Mac, is a solid basic free editor, but nearly everyone needs something more. Some people need styles, grammar checking, layout control, and graphics, so they move to Word or Pages. Clearly, there's money to be made there, if only by Microsoft. Other people need regex search and replace, code completion, syntax checking, block editing, etc., so they move on to a programmer's editor. How is the market for programmer's editors? XCode is free and very good; emacs, vim, etc. are also free and excellent. But there are people making money selling text editors. People buy BBEdit, TextMate, and SubEthaEdit because these programs have important features that give them more power over something they do all day. TextMate is my favorite example here, because it benefits from community involvement, with bundles and plugins to customize and add power.

Does that sound familiar? It should - many of us spend more time than we'd like reading and answering email. For some, it's their whole job. An email client that had unique and compelling features for professionals, knew its audience, had strong Mac-like design, and supported community extension, would be successful, just like TextMate.

Imagine if there was just a single bundled text editor for Macs, and we had to use it for writing everything from programming to business reports to family letters. Wouldn't it be annoying when an update came around and they added stationery and voice notes when you wanted refactoring support and better version-control integration? "Email client" isn't just a single app category, and it's about time someone realized it.

The read-once email client and reference emails

I've been dreaming of a new kind of email client, one that only lets you look at a new email once. That's right - you get to scan it for 30 seconds and then you have to do something with it or it gets archived out of sight. And you can only look at one email at a time. I think it'd be a great way to focus on getting your inbox empty and doing something useful with each message.

Doing something would mean replying to it, archiving or deleting it, creating a todo about it, or sending it to a notes program for reference. I really think that last one's important: email clients aren't for storing notes - send them somewhere else, where you can link them up and annotate them more easily.
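To make the idea concrete, here's a toy sketch of that one-message-at-a-time loop in Python. Everything here is invented for illustration - the Message type, the action names, the whole shape of it - it's just the logic I'm imagining, not a real mail client:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Message:
    subject: str
    body: str

@dataclass
class TriageResult:
    archived: List[Message] = field(default_factory=list)
    todos: List[str] = field(default_factory=list)
    notes: List[Message] = field(default_factory=list)

def triage(inbox: List[Message], decide: Callable[[Message], str]) -> TriageResult:
    """Present each new message exactly once; every decision removes it
    from the inbox. 'reply' and 'delete' are omitted for brevity."""
    result = TriageResult()
    for msg in inbox:                     # one message at a time, no going back
        action = decide(msg)
        if action == "todo":
            result.todos.append(f"Follow up: {msg.subject}")
            result.archived.append(msg)   # handled, so it goes out of sight
        elif action == "note":
            result.notes.append(msg)      # off to a notes program for reference
        else:
            result.archived.append(msg)   # default: archive it
    return result
```

The point of the sketch is that there's no "leave it in the inbox" branch - every message ends up somewhere else.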

I don't have any suggestions about how to make your email client do this, but I have come up with something for making reference emails more useful, using VoodooPad.

I've been using Mail Act-On as described by Merlin Mann to quickly move messages to appropriate mailboxes, and I've been able to keep a clean inbox. But - I never bother to look at the emails I send to the reference folder. They're basically useless without more context.

I just wrote a quick script to send selected emails to VoodooPad as new pages so I can link to them, add notes, and then later search those reference emails in the same context as my notes. It's already made a big difference in how useful those emails are - I can add comments to myself, find related notes in my VP doc, and since there's now a URL for every page in my VP docs, I can even link to reference emails from outside of VP - like in iGTD.

If you're interested in the script, it's here: NewVPPageFromEmail.scpt (control-click and save-as).
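The actual script is AppleScript talking to Mail and VoodooPad, but the transformation it performs is roughly this, sketched in Python. The page layout and function name here are made up for illustration, not what the script literally does:

```python
import datetime

def email_to_page(sender: str, subject: str, body: str,
                  received: datetime.date) -> tuple:
    """Turn one email into a (page name, page text) pair for a notes
    program. Prefixing the name makes mail pages easy to spot and link."""
    page_name = f"Mail: {subject}"
    page_text = "\n".join([
        f"From: {sender}",
        f"Date: {received.isoformat()}",
        f"Subject: {subject}",
        "",
        body,
        "",
        "Notes:",   # room for annotations and links to other pages
    ])
    return page_name, page_text
```

Keeping the headers in the page text is what gives the email the context it was missing in a reference mailbox - the notes and links do the rest.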

HPC blogs and news sites

I've always liked the programming-languages community website Lambda the Ultimate, and recently I went looking for something similar for the High-Performance Computing community. I didn't find exactly that*, but I did find a few great resources for news about HPC and computing research policy:

HPCWire is a well-known news source for HPC. It has daily news updates, and occasional columns by guests from around the industry. Most of the news is press releases from companies and government labs, but it's nice to have a single place to check for them. However, there is apparently no RSS feed, and to get email updates, you have to buy a subscription. I haven't, but I do check back occasionally to read the columns.

Supercomputing Online is another professional news outlet; it reads a little less like a press release, seems to have more coverage of academic news, and does have an RSS feed. insideHPC is more of a weblog than either of the first two - John E. West covers news stories in brief with some added perspective and analysis. I like his approach, and I've joined him to help cover academic HPC issues, both computing research and issues affecting computational scientists.

One of many blogs at Sun is the HPC Watercooler, covering Sun's HPC products and services, as well as some non-Sun related news. I've found it pretty interesting already, and I'd be interested to see weblogs from other HPC vendors.

Finally, a couple of blogs that are less directly related to HPC but still very relevant for computing researchers, are Dan Reed's weblog at the Renaissance Computing Institute, and the CRA Computing Research Policy Blog, both of which cover computing research policy and funding issues that don't often show up in news coverage of either government or computing.

* - if anyone wants to start an LTU-alike site for HPC research, or point me to one, I'll sign up and contribute in an instant.

The TRIPS processor

The UT-Austin TRIPS project will be unveiling their new processor next Monday. (event details)

This is a pretty interesting attempt to get around the problems facing processor design today. Clock speeds have stalled, but the actual Moore's Law - the one about transistor count, not "speed" - still holds, so the question is what to do with all those transistors besides stamping out more copies of basically the same old chip.

A lot of answers you hear involve pushing that complexity up to the programmer, forcing more people to become parallel programmers. This is almost certain to happen at least a little, but let's hope we don't have to give up on the sequential programming model completely. If you think software is bad now…

The TRIPS processor is an example of another approach - placing more of the burden of finding and using parallelism onto the compiler and architecture, keeping programmers' heads above water. It's pretty exciting to see something this different make its way into actual silicon.

The basic idea is that instead of a single piece of control logic organizing the actions of multiple functional units, finding concurrency within a window of instructions using reordering, the TRIPS processor is distributed at the lowest level - each functional unit is a mini-processor (called a tile), and instructions executing on separate processor tiles communicate operands directly, not through a register file. Usually this is described as executing a graph of instructions instead of a single instruction at a time.
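As a toy illustration of executing a graph of instructions rather than a sequence - just the dataflow idea, not the TRIPS microarchitecture itself - here's a sketch where each pseudo-instruction fires once its operands have arrived and forwards its result directly to consumers, with no shared register file in sight:

```python
# Toy dataflow execution. Each "tile" holds one instruction and fires as
# soon as all of its operands are available, forwarding the result
# directly to the instructions that consume it. Names and representation
# are invented; real TRIPS scheduling is far more involved.
import operator

def run_graph(instrs, inputs):
    """instrs: {name: (op, operand_names)}; inputs: {name: value}.
    Returns the value produced at every node."""
    values = dict(inputs)
    pending = dict(instrs)
    while pending:
        ready = [n for n, (_, args) in pending.items()
                 if all(a in values for a in args)]
        if not ready:
            raise ValueError("cycle or missing operand in graph")
        for n in ready:                       # these can all fire in parallel
            op, args = pending.pop(n)
            values[n] = op(*(values[a] for a in args))  # direct forwarding
    return values

# (a + b) * (a - b), expressed as a graph rather than a sequence
graph = {
    "sum":  (operator.add, ("a", "b")),
    "diff": (operator.sub, ("a", "b")),
    "prod": (operator.mul, ("sum", "diff")),
}
```

Note that "sum" and "diff" are ready in the same step - the parallelism falls out of the graph structure instead of being rediscovered by reordering hardware.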

Current processors certainly don't just execute one instruction at a time, and they do plenty of moving instructions around, so I tend to see this explicit-data-graph description as just the far end of a spectrum that starts with old superscalar designs, continues through out-of-order processors and multithreaded architectures, and currently seems to end here.

A TRIPS processor can run four thread contexts at once, with an instruction window of 1024 instructions to reorder and 256 memory operations in flight at once. For comparison, the late '90s Tera MTA ran 128 threads at once (128 different program counters), and the 2003-vintage Cray X1 processors kept track of 512 memory operations at once. Just like TRIPS, each of those architectures required extensive compiler support for good performance.

A particularly interesting point is the fully partitioned L1 cache - meaning that there are multiple distributed L1 caches on the chip, so where your instructions are physically executing will be important for performance - if they're near the cache bank holding their operands, they will execute sooner.

The natural question when looking at a new and interesting architecture like this, especially one that promises a tera-op on a chip, is whether it will make its way into a laptop you can buy anytime soon. I have no idea whether the UT team has any industry deals in the works, but I would bet against something like this becoming mainstream quickly - because these architectures rely so heavily on a custom compiler with aggressive optimization, a lot of dirty work is required to move existing software to them.

It will be interesting to follow this project and see how their actual hardware performs.

Hack Like a Champion Today

A little local flavor

Originally uploaded by michael.mccracken.

The UCSD CSE department moved into a new building more than a year ago, and it still kind of feels like a hospital. Clean walls, no character.

My labmate Jon and I decided to try to do something about it, and this picture shows the result.

For the record, we shocked ourselves by getting this done completely through appropriate channels. We asked people in charge, and they were down with it. Like true champions.

Announcing Skim: Stop printing - Start Skimming.

If you spend a lot of time reading articles and research papers that you get in PDF form, then you might be interested in the latest app from the folks who brought you BibDesk. If you already use BibDesk, then you certainly want to take a look.

Even though we keep our research papers stored on disk as PDF, all too often we print them out to read and write notes on. There's something missing in the experience of reading papers on a computer, but it doesn't have to be that way.

Announcing Skim. Skim is a PDF reading and note-taking app for Mac OS X that is designed to make reading research papers and manuals better. Just like in Preview, you can search, scan, and zoom through PDFs, but you also get some custom features for your workflow:

  • Snapshots: if there's a graph on page two and the description continues to page three, just draw a box around the graph with the command key down and a snapshot window pops up with the graph, and you can keep on reading with the graph in view. For more fun, minimize that snapshot window - they stick around in their own dock in the document window.

  • Tooltips: If a PDF has links, such as for citation references or indexes and section headings, you can click on them as usual to go to the destination, but there's more - hover the mouse over those links and Skim will show you a tooltip with the target of the link. No more losing your place to peek at a citation! For more fun, command-click on a link to pop up a snapshot window showing the link's destination.

  • Presentation and Full-screen Modes: Full-screen reading is handy. So is showing a PDF as a presentation. But they're a little different. For instance, you might not want to show the table of contents in a presentation, but it's nice to see it when you're just reading by yourself. So Full-screen and Presentation are separate modes in Skim.

There's plenty more - download it and take a look, and join the mailing list to discuss it. There's even a full help book in the first public beta release!

Many thanks to everyone who has worked on this app, and especially to Christiaan Hofman, who moved the app from a prototype to something really useful faster than I would have thought possible.

A script for text placeholders in VoodooPad

Last year I wrote about my new page template for VoodooPad. I still use something like it - I like the uniform look and the built-in navigation starters I get in every page.

I got tired of all the clicking around it took to fill in the navigation every time I put in a new page, so I decided to write a script to mimic XCode's "Select Next Placeholder" command. In XCode, if you use code completion, you might get something like this: [dict setObject:<#(id)anObject#> forKey:<#(id)key#>]. Pressing Control-/ then cycles the selection through those placeholders so you can quickly replace them with whatever you want.

That's really handy for code, and it's great for VoodooPad templates too. I wrote the script as a Python script plugin for VoodooPad, and it maps Command-/ to select the next placeholder, wrapping the search at the end just like XCode does. Now my new page template in VoodooPad has a few placeholders in it, and I have a lot fewer pages with default template text sitting in there making me look lazy.
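The heart of it is just a wrapping search for the next placeholder. Here's a Python sketch of that part (the function name and placeholder regex are mine; the real plugin also has to move the editor's selection through VoodooPad's API):

```python
import re

# XCode-style placeholders look like <# description #>
PLACEHOLDER = re.compile(r"<#.*?#>", re.DOTALL)

def next_placeholder(text, cursor):
    """Return the (start, end) of the next placeholder after `cursor`,
    wrapping around to the top of the text like XCode's Select Next
    Placeholder. Returns None if there are no placeholders left."""
    m = PLACEHOLDER.search(text, cursor)
    if m is None:                      # nothing below the cursor: wrap
        m = PLACEHOLDER.search(text, 0)
    return (m.start(), m.end()) if m else None
```

Select the range it returns, and repeated presses walk through every placeholder in the page.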

Download it here. (Note: it needs the VoodooPad Python Plugin Enabler.)

Stallman at UCSD

Saint IGNUcius at UCSD

Richard Stallman gave a talk at the UCSD CS department yesterday, and packed the auditorium despite just one email announcement the afternoon before. There were some people I recognized from SDSC, plenty of students and faculty, and even a bunch of people from the nearby tech companies. His talk was about his philosophy of Free Software, which you can read more about here - I went to the talk to see him in person, even though I was pretty sure I'd heard most of what he had to say on the topic already, from his writing on the web and his biography, "Free as in Freedom", by Sam Williams.

As I expected, there wasn't much new (for me) about his main points, but it was interesting to watch him make his argument very carefully, proceeding logically, as if constructing a proof. He spoke for more than an hour, completely without notes and without pause. It was a great example that if you have a strong point to make and believe in your topic, bullet points are completely unnecessary.

I was a little surprised that he was completely accepting of proprietary software if you never release it. In the Q&A, someone asked him about a situation where their company had internal software that was a competitive advantage, and about his opinion of the ethics of keeping it secret. He said that as long as the software has no users who lack the four freedoms, there's nothing wrong with it. If it has only one user, it's free enough if that user never shares it.

He did allow that if there were some custom software that would be very useful and important to society, he might argue it'd be unethical to not release it, but said that was a separate issue.

His entire crusade is about the freedom of "your computing", and I noticed that he drew a pretty fine distinction about what your computing is. Someone asked him about "apps running on the web". He pounced on that phrasing, describing it as "a confusion", and pointed out that the software is running on someone else's server. He then defined the computing being done when you visit a web site as "their computing", and said it's as free as it needs to be - at least as long as it's not your data. So, for example, all Google software is free enough by his definition, but he recommends against using GMail because you don't have control over your data; at that point it's more your computing, though that's a separate issue from the software's freedom.

An interesting talk. If you're interested in the history of computing, and especially in the ethical issues of software, I'd suggest reading some of his essays. Even if you don't entirely agree with his philosophy, it's important to know about the social movement behind the software that runs the web, and most likely runs parts of your personal computing.