Marco.org

I’m a programmer, writer, podcaster, geek, and coffee enthusiast.

How and when the iMac and Mac Pro can go Retina

Three major factors have probably prevented a proper 2×-in-each-dimension “Retina” version of today’s 27″ iMac and standalone Thunderbolt Display, which would need to be 5120×2880 pixels:

  1. Panel supply: nobody mass-produces a 27″ panel anywhere near that pixel density, and manufacturing one would be difficult and expensive.
  2. GPU power: rendering and compositing that many pixels demands far more GPU performance than most shipping Macs have.
  3. Connection bandwidth: until very recently, no common display interface could carry even 4K at 60 Hz,2 let alone 5120×2880.1

Fortunately, that last one is no longer true at the high end. It’s just the first two holding us back now, but they’re also the slowest to change.

I suspect that 5120×2880 on the desktop is still 2–3 years away.

But I think Apple has already shown us how they’re going to do 27″ Retina. Apple’s not a difficult company to predict if you pay attention. My hopes for a 5120×2880 display for the new Mac Pro were blinding me to the obvious clues to what will very likely happen instead.

Before the 15″ Retina MacBook Pro (the first Retina Mac), power users buying the 15″ usually opted for the “high-resolution” option at 1680×1050 instead of the lower default of 1440×900. But when the 15″ Retina MacBook Pro was released, its panel was a “2×” version of 1440×900 — a step down in logical resolution for power users. To make up for the loss, Apple introduced software scaling modes that simulated 1680×1050 and 1920×1200 by rendering them at double-size and then scaling the image down to the physical 2880×1800 pixels. GPU performance is reduced slightly, but otherwise, these higher-resolution scaling modes are effectively “free” — the pixels are so small that the reduced image quality from downsampling is barely noticeable.
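To make that concrete, here’s a minimal sketch of the scaling-mode arithmetic (my illustration, not Apple’s actual implementation):

```swift
// A minimal sketch of Retina scaling-mode arithmetic, assuming the 15" panel.
// Each simulated mode renders at 2x its logical size, then the result is
// downsampled to the panel's physical pixels.
import Foundation

let panel = (width: 2880, height: 1800)  // 15" Retina MacBook Pro

let logicalModes = [(1440, 900),    // native 2x: no downsampling needed
                    (1680, 1050),   // simulated "high-resolution" mode
                    (1920, 1200)]   // simulated maximum mode

for (w, h) in logicalModes {
    let rendered = (width: w * 2, height: h * 2)              // draw at double size
    let scale = Double(panel.width) / Double(rendered.width)  // then scale down
    print("\(w)x\(h) logical -> rendered at \(rendered.width)x\(rendered.height), " +
          "scaled by \(String(format: "%.2f", scale)) to fit \(panel.width)x\(panel.height)")
}
// e.g. 1920x1200 is rendered offscreen at 3840x2400, then scaled by 0.75.
```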

A few months later, the 13″ Retina MacBook Pro was released with the same trick: a relatively low native resolution of 1280×800 logical pixels (2560×1600 physical pixels), but with simulated higher resolutions.

To bring Retina to the 27″ iMac and 27″ Thunderbolt Display, Apple doesn’t need to wait until 5120×2880 panels are available. They can launch them at the next-lowest common resolution and use software scaling to let people simulate it if they want, or display things slightly larger at perfect native resolution.

That next resolution down, of course, is 4K.

The hardware and software to support 4K are already available. Thunderbolt 2 drives it well. Only very recent GPUs can drive it, but only very recent Macs with decent-to-powerful GPUs have Thunderbolt 2.3
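The bandwidth claim is easy to sanity-check. Here’s a back-of-the-envelope sketch (uncompressed pixel data only, ignoring blanking and protocol overhead, which only add to the totals):

```swift
// Raw display bandwidth: width x height x refresh rate x bits per pixel.
func gigabitsPerSecond(_ width: Int, _ height: Int, hz: Int, bpp: Int) -> Double {
    return Double(width * height * hz * bpp) / 1_000_000_000
}

let thunderbolt2 = 20.0  // Gbit/s

let fourK = gigabitsPerSecond(3840, 2160, hz: 60, bpp: 24)  // ~11.9 Gbit/s
let fiveK = gigabitsPerSecond(5120, 2880, hz: 60, bpp: 24)  // ~21.2 Gbit/s

print(fourK < thunderbolt2, fiveK < thunderbolt2)  // true false
```

Even at 24 bits per pixel, 5120×2880 at 60 Hz needs about 21.2 Gbit/s — the same figure as in footnote 1 — while 4K fits within Thunderbolt 2’s 20 Gbit/s with room to spare.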

All Apple needs to do to deliver desktop Retina is ship a 27″ 4K Thunderbolt 2 monitor and enable the software scaling modes for it in an OS X update.

We just learned that it’s possible to sell a 28″ 4K display that’s at least halfway decent for only $700 and a 24″ for $1300. With Apple’s supply chain, they could almost certainly sell a 27″ 4K Thunderbolt 2 display for $999–$1299 now and a 27″ Retina iMac later this year at pricing similar to today’s 27″ iMac.

I’ve gone back and forth on this, but I think both a Retina iMac and a Retina Thunderbolt 2 Display will be released in 2014.


  1. At 60 Hz, the minimum needed for comfortable general use, and 32 bits per pixel. You might argue that only 24 bits are necessary, but 5120×2880 at 60 Hz and 24-bit still needs 21.2 Gbit/s — more than Thunderbolt 2.

    Another option would be to make a 5120×2880 display that requires two cables, but they’d need to be plugged into two different Thunderbolt 2 buses. This would effectively monopolize 4 of the 6 Thunderbolt ports on the new Mac Pro and wouldn’t be compatible with the recent Thunderbolt 2-equipped Retina MacBook Pros at all. I don’t see Apple doing this. ↩︎

  2. Note from that Wikipedia page that DisplayPort 1.2, the standard that became fast enough to support 4K and has only recently become widespread, was finalized in 2009. ↩︎

  3. Only the new Mac Pro and the late-2013 13″ and 15″ Retina MacBook Pros have Thunderbolt 2. The MacBook Air, iMac, Mac Mini, and non-Retina MacBook Pro still only have first-generation Thunderbolt.

    The Retina MacBooks don’t officially support 4K displays over their Thunderbolt 2 ports, but this appears to be an OS X software limitation only. ↩︎

Forcing links to open in new windows: an argument that should have ended 15 years ago

I took a quick jab at The Verge in yesterday’s post for using target="_blank" in their link markup, which tells browsers to always open the target in a new window. (Modern browsers usually have options or extensions to put them into new tabs instead.)

A lot of people have asked me to clarify whether forcing links to open new windows or tabs is actually bad behavior (and why), or just outdated markup that could be replaced with a new HTML5 way to do the same thing. I meant the former. People have been arguing about this for over a decade, so I’ll keep my position brief.

Forcing links to open in new windows has two main purposes:

  1. To avoid disturbing an important session in progress for a temporary digression, such as FAQ/documentation links in the sidebar when you’re doing online banking.
  2. To “keep people on your site”, ensuring that even when visitors navigate away, your tab is never closed and the user is forced to interact with it again later. Maybe they’ll let the ads refresh a few more times or click another story!

I believe the former is justifiable, the latter isn’t, and reading a news or blog article does not qualify as an undisturbable session for most people. And I think over a decade of user confusion and frustration resulting from target="_blank" backs that up.

Most people know how to open your article’s outbound links in new tabs or windows, especially readers of a tech site. Modern browsers make multiple-tab/window management very easy for almost everyone who wants them, and the people who don’t know how to manage them usually don’t want them.

The best practice for the modern web is to let people manage their own windows and tabs.

Usable PAR20 LED bulbs

My kitchen is lit by 14 little recessed halogen PAR20 bulbs. They use 50 watts each, so it takes a whopping 700 watts to fully light one of the most commonly used rooms in the house. (I gather that when the previous owners remodeled it in 2005, the use of tons of tiny recessed lights was in style — electricity usage be damned.) And with this many of them, it seems like they’re constantly burning out; I lose a $10 bulb every month or two. It’s wasteful and annoying.

I’ve been eyeing LED replacements for a while, but they all looked too primitive until recently: they had giant ugly metal fins, they looked harsh because they lacked good diffusers, and they cast a narrower light beam than the normal halogens.

But LEDs are advancing so quickly that I decided to try two current, well-reviewed models, one from Feit and one from Philips, and install one of each, side-by-side with the halogens, for comparison.

Both LED bulbs are about the same size and weight as halogen PAR20s, and both are reasonable-looking.

Both LEDs have a brief but noticeable startup delay of less than a second, like most LED bulbs. I usually find startup delays fatally annoying, but I’ll tolerate it for these since good PAR20 options are limited.

The three bulbs’ advertised color temperatures seem accurate. The Feit is slightly cooler than the halogens, while the Philips is slightly warmer. (It’s a little hard to gauge the colors accurately from the photo since the cabinets are a cream color, warmer than neutral white.)

The Feit is slightly brighter than the halogens. (There’s no way those halogens are putting out their advertised 570 lumens.)

As you can see, the Feit casts a beam about as wide and diffuse as the halogens’ — in fact, it does an even better job of it. The Philips is a bit too narrow on the cabinet doors, although it’s less noticeable at counter level. The Philips also isn’t very diffuse — look at the difference in the two shadow directions on the cabinet-door handles. Shadows cast by the Feit approximately match the halogens’ shadows, while the Philips’ are much darker and sharper.

The Philips also has the same odd orange-pink tint as every Philips LED bulb I’ve owned. Some people don’t mind it, but I really don’t like it.

I prefer the Feit overall, and I’m pleasantly surprised that there doesn’t seem to be much of a downside compared to halogens except the slightly cooler color temperature. Finally, PAR20 LEDs are good.

So good that I ordered 13 more, which will pay for themselves in only a year.
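The payback estimate is plausible with rough numbers. In this sketch, the LED wattage, electricity rate, daily usage, and bulb price are my illustrative assumptions, not figures from the post:

```swift
// Rough payback estimate for replacing 14 halogen PAR20s with LEDs.
// Only the 50 W halogen draw and the "$10 bulb every month or two" come
// from the post; everything else is an assumption for illustration.
let bulbs = 14.0
let halogenWatts = 50.0
let ledWatts = 8.0          // assumed: typical PAR20 LED draw
let hoursPerDay = 5.0       // assumed kitchen usage
let dollarsPerKWh = 0.18    // assumed electricity rate
let ledPrice = 25.0         // assumed price per LED bulb

let kWhSavedPerYear = bulbs * (halogenWatts - ledWatts) * hoursPerDay * 365 / 1000
let energySavings = kWhSavedPerYear * dollarsPerKWh       // ~$193/year
let replacementSavings = 10.0 * 8                         // ~8 dead halogens/year
let paybackYears = (bulbs * ledPrice) / (energySavings + replacementSavings)

print(paybackYears)  // ~1.3 years with these assumptions
```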

The Nest-Google privacy statement

The defensive FAQ by Nest to alleviate widespread fears about the Google acquisition has been quoted extensively. The whole thing (it’s short) is worth examining critically.

Before we dig in, I want to acknowledge what I consider the first great Nest partnership. I’m not talking about Google. I’m talking about the one between you and the team here at Nest.

How patronizing.

Let’s all give each other a big hug! We’re only a $3 billion company that just got acquired by a massive advertising conglomerate that controls, tracks, and records an ever-expanding amount of everything that people do with all modern technology. We’re all friends and everything’s great. Give yourselves a big round of applause for being such great customers!

Keep this next part in mind:

Will Nest and Google products work with each other?
Nest’s product line obviously caught the attention of Google and I’m betting that there’s a lot of cool stuff we could do together, but nothing to share today.

This is fairly straightforward and obvious: the division between Nest and Google products and services will blur, leading to merged products in the future. Of course. Why would Google buy them if they didn’t have something like that in mind?

The definitions of “products” and “services” can be broad. Maybe the thermostat is still a Nest-branded product, but the weather service it connects to is a Google service. Habit analysis and prediction could also be a Google service. (It already is.) Obviously, Google and Nest should be considered one entity with one product line and shared services in the future, regardless of whose name is painted on the front of the thermostat.

So when I see so many people only quoting this part and thinking it changes anything, it’s pretty easy to have a more cynical (and realistic) interpretation:

Will Nest customer data be shared with Google?
Our privacy policy clearly limits the use of customer information to providing and improving Nest’s products and services. We’ve always taken privacy seriously and this will not change.

Statements like this should be interpreted as if you’re a lawyer trying to find a loophole. (Because theirs will.)

“This will not change” only refers to “We’ve always taken privacy seriously”. In other words, the sentence only says “We will always take privacy seriously”, which doesn’t mean anything and should be disregarded. So we’re down to this:

Will Nest customer data be shared with Google?
Our privacy policy clearly limits the use of customer information to providing and improving Nest’s products and services.

The question in bold is not answered by the following sentence, or anywhere else. Asking a question without answering it is a diversion, containing no information, so it can also be removed. We’re left with only one sentence that actually says something:

Our privacy policy clearly limits the use of customer information to providing and improving Nest’s products and services.

It’s meant to sound reassuring, but their privacy policy can change whenever they feel like it. And remember, the definition of “providing and improving Nest’s products and services” can be very broad.

Think of how much more accurate your Nest thermostat’s predictions could be if it integrated with a few Google services.

If you’re using Google’s services enough to give them a pretty good idea of where you are and what you’re doing,1 Nest could automatically turn your heat on so it reaches the ideal temperature at exactly the time you’re most likely to arrive home based on your location, travel speed, the route you usually take, and current traffic conditions. How clever and impressive! It’s even environmentally friendly!
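As a hypothetical sketch of that scenario (no real Nest or Google API, just the logic): start heating when the time needed to warm the house equals your estimated time to arrival.

```swift
// Hypothetical pre-heating logic for the scenario above. None of this is a
// real Nest or Google API; the numbers are illustrative.
import Foundation

struct House {
    let currentTemp: Double     // degrees F
    let targetTemp: Double
    let degreesPerHour: Double  // how quickly the furnace warms the house
}

// Assume a location service (route, speed, traffic) supplies the arrival estimate.
func shouldStartHeating(now: Date, estimatedArrival: Date, house: House) -> Bool {
    let warmupHours = max(0, house.targetTemp - house.currentTemp) / house.degreesPerHour
    return estimatedArrival.timeIntervalSince(now) <= warmupHours * 3600
}

let house = House(currentTemp: 58, targetTemp: 68, degreesPerHour: 5)
let arrival = Date().addingTimeInterval(45 * 60)  // predicted home in 45 minutes
print(shouldStartHeating(now: Date(), estimatedArrival: arrival, house: house))
// 10 degrees at 5 degrees/hour needs 2 hours of warmup -> start now: true
```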

Google won’t break into your home. You’ll invite them in.


  1. An interaction could be something explicit and obvious, like using Google search or Maps to navigate or find something. Or it could be something you don’t expect to give Google any tracking information, like viewing a web page with an AdSense or +1 embed, but that’s probably enough.

    They have so much data about you, your browser, your phone, your computer, and your current IP address that it doesn’t take much for them to make a pretty good guess at who you are, where you are, and what you’re doing most of the time. ↩︎

BugshotKit

Last summer, I spent about two weeks writing a little app called Bugshot to help me report and track app rendering bugs during the iOS 7 transition, learn the iOS 7 style a bit myself, and jump-start my iOS development motivation again after losing my soul to paperwork all spring. It succeeded at all of those.

It was also an experiment in pricing: I launched it at $0.99 — my first app ever at that price — to see how it would sell once the launch buzz died down. It was abysmal. Average daily revenue in November was $4.18, and in the first half of December, it had fallen to just $3.28.1

As a money-maker, it was a failure, but it was very educational. I decided to learn one more lesson from it: a month ago, right before the iTunes Connect holiday shutdown, I quietly made it free and didn’t tell anyone. Some of the app-tracker sites picked it up, which caused a huge spike in downloads.

But like all App Store trend lines, it quickly went down, settling at around 50 downloads per day recently, and I bet it will continue to fall.

Now I’m taking the Bugshot experiment in a different direction: open source.

I didn’t want to just open-source the entire app. That may be slightly educational to some, but it wouldn’t be very useful.

Instead, I’ve taken the last few days to build BugshotKit.

I’m starting the Overcast beta soon,2 and I wanted an easy way for my testers to report (non-crash) bugs and provide UI feedback. I also wanted a way to remind myself of UI or feature ideas easily, and I’ve occasionally needed to view the error console on the device when tracking down difficult bugs.

BugshotKit addresses all of these: it’s an embeddable Bugshot annotation interface and console logger, invoked anywhere in your app by an otherwise unused gesture (e.g. a two-finger swipe up, a three-finger double-tap, pulling out from the right screen edge, etc.), that lets you or your testers quickly email you with helpful details, screenshots, and diagnostic information.
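To give a sense of the mechanics, here’s a rough sketch of the general pattern: an otherwise-unused gesture that captures a screenshot and presents a mail composer. It’s my illustration, not BugshotKit’s actual API (see the GitHub page for that), and the recipient address is a placeholder.

```swift
// A rough sketch of the general pattern (not BugshotKit's actual API):
// an otherwise-unused gesture captures a screenshot and opens a feedback email.
import UIKit
import MessageUI

final class FeedbackReporter: NSObject, MFMailComposeViewControllerDelegate {
    func attach(to window: UIWindow) {
        // Two-finger swipe up: unlikely to collide with normal app gestures.
        let gesture = UISwipeGestureRecognizer(target: self, action: #selector(report))
        gesture.direction = .up
        gesture.numberOfTouchesRequired = 2
        window.addGestureRecognizer(gesture)
    }

    @objc private func report(_ gesture: UISwipeGestureRecognizer) {
        guard MFMailComposeViewController.canSendMail(),
              let window = gesture.view as? UIWindow,
              let root = window.rootViewController else { return }

        // Snapshot the current screen to attach to the report.
        let renderer = UIGraphicsImageRenderer(bounds: window.bounds)
        let screenshot = renderer.image { _ in
            window.drawHierarchy(in: window.bounds, afterScreenUpdates: false)
        }

        let mail = MFMailComposeViewController()
        mail.mailComposeDelegate = self
        mail.setToRecipients(["feedback@example.com"])  // placeholder address
        mail.setSubject("Bug report")
        if let png = screenshot.pngData() {
            mail.addAttachmentData(png, mimeType: "image/png", fileName: "screenshot.png")
        }
        root.present(mail, animated: true)
    }

    func mailComposeController(_ controller: MFMailComposeViewController,
                               didFinishWith result: MFMailComposeResult,
                               error: Error?) {
        controller.dismiss(animated: true)
    }
}
```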

The GitHub page has more about it if you’re interested.

It contains most of the interesting Bugshot code,3 some interesting console stuff, and some functionality that should be useful to other developers.

I’ve learned a lot from Bugshot, and I’m still learning from it. Now, I hope I can share some of the benefits with other developers.


  1. From its launch in July until I made it free in December, Bugshot made a total of $4,724.67. This was heavily front-loaded, making $4,088.11 in July and August alone. ↩︎

  2. I’ve never had so many requests to be on a beta before. I appreciate the interest, but I’ve gotten far more requests than the 100-device limit. ↩︎

  3. I’ve changed the blur tool from a “pixelate” style to a real blur because the pixelate filter used GPUImage, which, despite being awesome, is a massive dependency. I didn’t want BugshotKit to have any dependencies. ↩︎

“We’re Just Flipping Through Index Cards”

Myke Hurley’s recent podcast interview of John Roderick is excellent.

At 39 minutes, Myke asked how music promotion works today. I’ve quoted part of John’s response, with some slight editing and paraphrasing to read more easily:

[40:30]
[Ten years ago,] you were dependent on this whole cultural architecture of magazine writers, newspaper writers, college radio, commercial radio, public radio… and if your record got into the stream, and the right person liked it and talked about it, then pretty soon you’ve created a storm of interest that started with one or two people who decided that this record was something that really mattered.

If you couldn’t get those people to take an interest in your record — because of course everybody in the world knows who those few people are, and they’re inundating them with albums — if you couldn’t get that person to take the time, or if they just didn’t like it, then you’d be struggling, grasping at every opportunity to get someone further down the food chain to take an interest in this album. …

[43:07]
Well, five years ago, all of a sudden the conventional wisdom started to change. “Oh, no, we don’t have to do any of that anymore! You just put it on the internet, everybody listens to it, and ‘the crowd’ decides! And you don’t have to do any of that bullshit anymore. You can just tweet about your record, and everybody’s going to listen to it and love it!”

And for a brief moment, when the internet was still comprised mostly of all the right people, it was just the cool kids that were on there. Clap Your Hands Say Yeah could put out a record on Myspace, and the cool kids would all get it.

But, of course, that window was short-lived. Now, we’re back to a world where everybody’s on the internet, and nobody cares. Nobody’s following your tweet link to your record anymore! Except your fans, people who already like you.

My Twitter feed is now 85% links to people’s Kickstarters and YouTube videos. And I only follow people I know! Imagine following your favorite bands — it would be never-ending. Everybody’s trying to promote themselves the same way.

The problem is now, if you hire a publicist, what are they doing? They’re just tweeting about it, too, because the magazines are gone, the record stores are gone… it’s anybody’s guess how to promote a record now. …

[45:28] I hate to sound curmudgeonly, but … what is inevitable is that the mean quality of everything is declining. In the early ’70s, it was very expensive to make a record, and you had to be really good at it to even get into the studio to give it a shot. The record companies were very selective, and the music that made it all the way out to the marketplace was astonishingly good. Think about the music that came out between 1962 and 1972: what an astonishing quality of music, in every genre. Ten different genres of music were invented and perfected.

Now, we live in a world where there are probably more records coming out this week than what came out in all of 1967. All of that quantity probably hasn’t produced a single record that was as good as the worst record from 1967. Everything is easier to make, so more people are making it, the standard is so much lower for what you need, and it’s a confusing din.

As a culture, we are satisfied with worse, because there’s so much more of everything.

When a Marvin Gaye record came out 40 years ago, presumably, you went and spent your record-buying allowance on it, and you brought it home and listened to it exclusively for 2 weeks. It was an investment. This was it! You’re going to listen to this, or you’ve got an AM radio and a newspaper.

Now, we’re just clicking through songs. “How does this one sound? Oh, that’s good. How does this one sound? Pretty good. This one’s good.”

We’re just flipping through index cards.

This is equally true in all media today, including software.

This is why a hundred other sites are trying to be Daring Fireball, why everybody’s starting a podcast, and why nobody’s buying your app in the App Store.

The democratization of media production and distribution over the last few decades has worked incredibly well. Overall, it’s a net win for society. But the downside is that everything’s now extremely crowded.

There’s a lot of money and attention out there to go around, but there’s also a lot more competition for everything.

Apparently It’s OK For iOS Apps To Ask For Your Apple ID And Password

Apple’s currently featuring the Sunrise app in the App Store.

Upon first launch, Sunrise invites you to create an account, then asks you to add a calendar. The first option, “iCloud Calendar”, brings you to a screen where the Sunrise app itself, in its native interface and code, solicits your Apple ID (iCloud) email address and password.

This is apparently OK.

I first saw Neven Mrgan point this out (good replies there), with some additional commentary from Michael Tsai. I couldn’t believe it, so I downloaded the app myself and took these screenshots.

Sunrise claims that they’re not storing the credentials and are instead just getting a login token of some sort from iCloud. (It’s unclear whether they’re transmitting your email and password to their servers and getting the login token from there, or doing the exchange from the device.) But that doesn’t matter at all.

No app or website should ever be asking for a high-security username and password directly, especially given how much is tied to your Apple ID. What year is this?

It’s downright dangerous that Apple not only let this through app review, but is promoting it.

To my surprise, there’s no rule against doing this. That needs to change immediately.

(Update here with a Sunrise response, sort of.)

Starting Your Own Podcast Ad Network

Brent Simmons got some great advice out of Lex Friedman about how to run a small podcast-ad network that needs to exist, and previously did, but currently doesn’t. They’re both calling for someone to make a network like that again.

Podcast geeks jumping into ad sales is a recent phenomenon, mostly out of necessity because there’s no AdSense for podcasts. And trust me, the world is better off without that.

AdSense (and later, better web ad networks like The Deck) removed much of the reason for most independent bloggers to join blog networks. The lack of accessible ad networks in podcasts, and the sheer amount of work it takes to sell directly, is a huge reason why podcast networks are relevant and common. If you could easily outsource just the ad-sales part separately, and change it at will if things don’t work out with someone, there’s much less reason to join a podcast network. You could host your files on SoundCloud or Libsyn, put the show on Squarespace,1 use your own domain name, teach yourself basic editing in GarageBand (it’s easy), and be completely independent.

I tried selling the sponsorships myself in the early days of ATP, but was quickly demoralized and jaded by the reality of that job. It takes a lot of email, some long phone calls, a lot of paperwork, and a lot of nagging to get past-due invoices paid. It’s common for sponsors to ignore your payment-due dates and pay months after you actually do the sponsorships. Most big sponsors have their own way they “need” to work, blaming “the accounting department” or “policy”, and these arbitrary accounting rules and policies often mean that your ad salesperson has a lot of work to do and you’re not getting paid for a long time. (This dance isn’t new to most contractors.)

Even in the best cases, working with the easiest sponsors, you still need to schedule each sponsorship, bill for each sponsorship, get a script or bullet points (which sometimes needs a phone call), prepare those materials while recording each show, read them during the show, link to them on the website, and follow up with them afterward if necessary. It’s a lot of work.

I say all of this not because I dislike sponsors — they pay far more than we’d be likely to get from direct listener donations, and if we forced a paywall, nobody would listen and our show would be irrelevant. If we didn’t have sponsors, we wouldn’t make enough to be worth the substantial time it takes to produce the show. Sponsorships enable podcasting as we know it today. There are some exceptions, but not many that anyone has heard of.

But the job of a podcast-ad salesperson isn’t trivial. That’s why we outsourced it, and why it’s worth losing a significant percentage of our income to commissions to do so. I wouldn’t recommend getting into this business unless you’re prepared to deal with, and would enjoy, the day-to-day reality and tedium of it.

And if you can handle people like me bugging you every week to get our invoices paid.2


  1. If you don’t care about download stats, you can even host the files on Squarespace. But if you’re selling ads, you’ll need download stats.

    Squarespace could take a lot of Libsyn’s customers if they offered podcast download stats, but it doesn’t look like they want to be in that business at a bigger scale than what they’re offering now. ↩︎

  2. Lex is a very patient, tolerant man. ↩︎

Long-Form

Jonathan Mahler’s When “Long-Form” Is Bad Form in The New York Times this weekend has generated a lot of discussion. I saw it as a sloppy collection of disparate rants with mixed validity, but one resonates:

The problem is that long-form stories are too often celebrated simply because they exist. And are long. …

When you fetishize — as opposed to value — something, you wind up celebrating the idea of the thing rather than the thing itself.

Mahler quotes from (but doesn’t link to, for no good reason) last month’s Against “Long-Form Journalism” by James Bennet, editor of The Atlantic. It’s much better-written and goes deeper into the “long-form” issue:

In the digital age, making a virtue of mere length sends the wrong message to writers as well as readers. …

As a writer, I used to complain that my editors would cut out all my great color, just to make the story fit; as an editor, I now realize that, yes, they had to make my stories fit, and, no, that color wasn’t so great. The editors were working to preserve the stuff that would make the story go, to make sure the story earned every incremental word, in service to the reader. Long-form, on the Web, is in danger of meaning “a lot of words.”

I faced a lot of pressure when running Instapaper to embrace the “long-form” fetish, which I resisted as much as possible. With whatever influence I had by starting the read-later-app genre, I tried to take the focus away from length and more toward context switching.

Read-later apps let you separate reading from finding, since they ideally happen with different mindsets and environments. This is necessary not because browsing aggregators, timelines, and feed readers is given too little time — people happily devote hours to it — but because the goal is to “get through” them and keep checking for new items, keeping readers in a skimming, active, dismissive mindset that’s hostile to attentive reading.

Instapaper’s usage data backed this up: there was almost no correlation between article length and the number of saves on Instapaper. People routinely saved everything from three-paragraph Lifehacker posts to 10,000-word feature articles. The most-saved sites were usually just the most popular sites among the kind of people who knew about Instapaper, not the sites with the longest articles.

Nobody was saving Lifehacker posts because they couldn’t read three paragraphs right then: they saved them because they wanted to attentively read them, which wasn’t going to happen in their current context.

But the “long-form” fetishization exploded around me, despite my efforts to separate read-later apps from that term.

Skimming fluffy articles and social timelines all day is like eating junk food all day. Eventually, you feel horrible, burn out, and just want something real. After decades of evolution, experimentation, and testing, web producers have honed the formula for addictive junk content to perfection. We have infinite junk available to us on demand, on any subject, from small rectangles available in our pockets, all day, every day. It’s no surprise that a growing number of people have begun fetishizing salads.

The problem is that long doesn’t mean good — it just doesn’t look like most of the junk. Too many people now ask for (and produce) “long-form” when they really want substantial. It’s entirely possible to be substantial without being long, and good editors have helped writers strike that balance for centuries. Emphasizing and rewarding length over quality results in worse writing and more reader abandonment.

Smart writers, editors, and publishers will recognize the difference and give people what they really want, rather than what they’re asking for.

Microsoft Customers Always Win

In 2011, when Windows 8’s “Metro” was in beta and everyone had high hopes for it, I wrote:

The question isn’t whether Metro will be good: it probably will be. … But how will their customers react?

Will Metro be meaningfully adopted by PC users? Or will it be a layer that most users disable immediately or use briefly and then forget about, like Mac OS X’s Dashboard, in which case they’ll deride the Metro-only tablets as “useless” and keep using Windows like they always have?

Microsoft’s customers don’t like change. They’re accustomed to getting everything they want, exactly as they want it, with no surprises. They won’t tolerate anything else.

If Microsoft releases anything too different, enterprise customers will simply refuse to buy it, demanding that Microsoft keep selling the old version indefinitely. And every time, Microsoft caves, because what else are they going to do? They’re desperate for upgrade revenue from business customers who see diminishing returns and increasing transition costs with each new version of Windows and Office.

Today, The Verge reports (via Moltz):

While the software giant originally released Windows 8.1 last year with an option to bypass the “Metro” interface at boot, sources familiar with Microsoft’s plans have revealed to The Verge that the upcoming update for Windows 8.1 will enable this by default. Like many other changes in Update 1, we’re told the reason for the reversal is to improve the OS for keyboard and mouse users.

Bye, Metro.

It’s not that Microsoft is incapable of making radical changes. Not only was Windows 8 the boldest move they’ve made since Windows 95, but it wasn’t even bad. It wasn’t great, but it wasn’t bad. Microsoft truly innovated with the UI to a greater extent than we’ve ever seen from them.

Their customers, as usual, smacked them back into line.

Adoption has been abysmal, PC manufacturers are advertising Windows 7 downgrades as features (much like they did from Vista to XP), and the Surface tablets have sold very poorly. Windows 8 has been one of the biggest disasters in Microsoft’s history.

They did everything that the press, analysts, and prevailing wisdom at the time were telling them to do. Everyone was pressuring them to be more like Apple, so they tried.

The problem isn’t that they botched it (although they did, in some ways). The problem is that Microsoft isn’t Apple, and Microsoft’s customers aren’t Apple’s customers. They tried selling a more Apple-like attitude to their customers, most of whom don’t want and won’t tolerate an Apple-like attitude. That’s why they’re not Apple customers.

Microsoft’s customers have always demanded, and will always get, exactly what they ask for. That’s the reality of serving the low- to midrange-PC business, and it’s sure as hell the reality of the enterprise business.

Microsoft’s biggest strategic mistake over the last five years has been forgetting who their customers really are.