Finite and Infinite Terms: the trouble with optimistic names


As has been obvious in the recent past, I've been a bit focused on how and why disciplines, especially disciplines relating to ubiquitous computing, are named what they are. I'm not a language precision pedant most of the time--words mean what we want them to mean, when we want them to mean those things and to the people we want to understand them--but the titles of large ideas have a particularly strong impact on how we think about them. They, in effect, set agendas. If scientists had called Global Warming something else, say "Global Weather Destabilization," that would have changed a lot of our expectations for it. People wouldn't nitpick about whether one degree is a lot or a little, or whether an unusually cold winter in Michigan means that it's all a sham.

Similarly, the names of the disciplines we involve ourselves in set a lot of expectations for the agendas of those disciplines. Lately, I've been thinking about why "ubiquitous computing" has such problems as a name. When I talk about it, people dismiss it as either a far-future pipe dream or an Orwellian vision of panoptic control and dominance. I don't see it as either. I've never seen it as an end point, but as the name of a thing to examine and participate in, a thing that's changing as we examine it but doesn't have an implicit destination. I see it as analogous to "Physics" or "Psychology," terms that describe a focus for investigation rather than an agenda.

Why don't others see it the same way? I think it's because the term is fundamentally different: it has an implied infinity in it. Specifically, the word "ubiquitous" implies an end state, something to strive for, something that's the implicit goal of the whole project. That's of course not how most people in the industry look at it, but that's how outsiders see it. As a side effect, the infinity in the term means that it simultaneously describes a state that practitioners cannot possibly attain ("ubiquitous" is like "omniscient"--it's an absolute that is impossible to achieve) and a utopia that others can easily dismiss. It's the worst of both worlds. Anything that purports to be a ubiquitous computing project can never be ubiquitous enough, so the field never gets any traction. The mobile phone? That's not ubiquitous computing because it's not embedded in every aspect of our environment and doesn't completely fade into the background. A TiVo can't be ubiquitous computing because it requires a special metaphor to explain it. The adidas_one shoe isn't ubicomp because it doesn't network.

The problem is not with the products, it's with the expectations that the term creates.

I see this problem with a lot of terms: artificial intelligence has "intelligence" as part of it, so nothing can be AI until it looks exactly like what we would call intelligence. Machine learning? That's not AI because it's just machines doing some learning. That's not intelligence. Pervasive computing can't exist until we have molecule-sized computers forming utility clouds, because nothing can be pervasive enough until then. Ambient intelligence is an amazingly bad term by this metric: TWO words with implied infinities.

As Liz (Goodman, my wife and fellow ubicomp researcher ;-) points out, when these terms are coined, they are created with a lot of implicit hope, with excitement and promise designed to attract people to the potential of the ideas. But after the initial excitement wears off (think AI in the 1970s), they create unmeetable expectations: the initial surge of ideas gives way to the grind of development, and setbacks mean that the results are never as ubiquitous, intelligent, pervasive, or whatever, as observers had been led to believe. AI was doomed to be a joke for a decade (or more) before its practitioners renamed it something that implicitly promised less, so they could deliver more.

So what to do about this? Well, I've done a couple of things: I've stuck with one term ("ubiquitous computing") rather than creating ever more elaborate terms to describe the same thing, and I've tried to use it to describe the past as well as the future. In my past couple of lectures I've been arbitrarily dating the beginning of the era of everyday ubicomp to 2005. It's not something in the future; it's something that's already in the past and the present. Is that a losing battle? Do we need to rename "ubicomp" something like "embedded computing product design," something that promises less so that it can deliver more? Maybe. I still like the implicit promise in the term and its historical roots, but I recognize that as long as there's an infinity in the term, there will always be misunderstandings. Some people (like the folks in New Songdo City) will actually try to create the utopian vision, and invariably fail. Others will criticize the field for even trying, while doing the same thing under a different name.

Me, I'm going to keep calling it "ubiquitous computing" or "ubicomp" until either it's clear that the costs of sticking with the name outweigh the benefits I believe it has, or a better term, one that's less likely to let everyone down, comes along.

(the title of the blog post references Finite and Infinite Games, a book I've never read, but which friends of mine tell me is quite good)

[2/18/09 Update: Michiel asked me (in email, because I have blog comments turned off) what I thought about "The Internet of Things" as a term. I've written about it before and I think it's a pretty good term. It's not as unbounded as the terms I mentioned: "Internet" is something people are familiar with, and "things" is a large set, but not an infinite one. There's some internal confusion because "the internet" is seen as ephemeral, and it's hard to imagine how that ephemeral idea translates to the very literal world of "things." Likewise, there's an implication that all things will become part of this new internet, which is also potentially confusing. Those criticisms aside, I don't think it's a bad term, provided it's defined well and used precisely. I don't think it's exactly the same idea as ubiquitous computing, for example, since I see it as more about individual object identification and tracking than about smart environments or ambient displays. If it starts to be yet another synonym for ubicomp, its value will diminish.

William sent me the following note:
Interesting observations! Two related bits:

One is Martin Fowler's thinking on "semantic diffusion":

http://martinfowler.com/bliki/SemanticDiffusion.html
http://martinfowler.com/bliki/FlaccidScrum.html

Another was a recent conversation I had at the Prediction Markets conference with an econ professor. He mentioned that the incentives are such that whenever a term develops a positive value, people attach themselves to it until its value swings negative. I think that basic model is too simple, but from it you can develop a richer model that explains a lot of what people get up to with terms.

I like the economic idea, though I agree that it feels too simple.]
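
For what it's worth, here's a minimal toy sketch of that simple model as I understand it--a few lines of Python, with numbers and a dilution mechanism I made up, not anything the professor actually said: a term's perceived value attracts adopters, each new adopter stretches the term's meaning a little, and the bandwagon keeps growing until the value swings negative.

    def simulate_term_value(initial_value=1.0, dilution=0.02, periods=15):
        # Perceived value of the term and the number of people attached to it.
        value = initial_value
        adopters = 0
        history = []
        for period in range(periods):
            if value > 0:
                # Bandwagon effect: a trickle of newcomers plus a share of existing adopters.
                new_adopters = 1 + adopters // 2
                adopters += new_adopters
                # Semantic diffusion: every new adopter stretches the meaning a little.
                value -= dilution * new_adopters
            history.append((period, adopters, value))
        return history

    for period, adopters, value in simulate_term_value():
        print(f"period {period:2d}: {adopters:3d} adopters, perceived value {value:+.2f}")

Run it and the value erodes slowly at first, then one last big wave of adopters pushes it below zero and it stays there, which is roughly the AI-winter shape described above. A richer model would need some way for a field to climb back out, renaming itself to promise less being one of them.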


About this Entry

This page contains a single entry by Mike Kuniavsky published on February 15, 2009 12:02 PM.

Smart Things: an outline was the previous entry in this blog.

ETech 2009: The Dotted-Line World is the next entry in this blog.

Find recent content on the main index or look in the archives to find all content.