April 2010 Archives

The blog of the future-think design consultancy PSFK interviewed me by email.

In the interview I talk about the book (of course) and ThingM's upcoming products. I also took the opportunity to reflect on a trend I've been noticing, of services that provide data streams rather than just units of data:

I think that there’s a really interesting trend in opening up data sources. Pachube works as a free data stream brokerage that sits on top of TCP/IP and HTTP to provide a kind of semantic resource location technology for small net-enabled devices that has been missing. This kind of data openness is being matched by things such as the US Government’s open data initiative at data.gov.

The trend I see here is the combination of openly sharing data sources and streams, and of creating business models around technology layers that make those data streams meaningful and valuable. Both Pachube and data.gov are a kind of search engine for data streams, rather than documents, which I think is a very powerful concept.

This is definitely related to the discussions around syndication that have been going on for years (since the launch of RSS), to micro-content, and to various services that add structured semantic information to Web-accessible data. However, I think what we're seeing now goes beyond those largely abstract discussions to create a more pragmatic understanding of what it means to create meaningful sources of data, rather than just meaningful units of data.

It means, as my last sentence implies, that there are enough data sources--whether it's sensor data automatically collected, organized and tagged by Pachube or the human-created sources of data presented by data.gov--that we can start having search services for such data. The conversation once again becomes about "wrangling" information shadows, as I discussed in my NASIG keynote two years ago.

In that discussion I talked about how journal subscriptions--which are a kind of knowledge white hole, wellsprings of specific kinds of information--represent a model for how information shadows can be organized and managed in the future. Well, it looks like we may be closer to that, and that the wrangling may be a combination of automated tagging and human curation.

Does this mean that Google will soon be automatically cataloging data streams? I'd be surprised if they're not already.
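To make the idea of a data stream concrete at the protocol level: a Pachube feed is just an HTTP resource that bundles one or more tagged datastreams from a device. Here is a minimal sketch, in Python, of reading one. The feed ID and API key are placeholders, and the endpoint and header names reflect my recollection of Pachube's v2 REST API, so treat this as an illustration of the pattern rather than as documentation:

    import json
    import urllib.request

    FEED_URL = "http://api.pachube.com/v2/feeds/1234.json"  # placeholder feed ID
    API_KEY = "YOUR_API_KEY"                                 # placeholder key

    request = urllib.request.Request(FEED_URL, headers={"X-PachubeApiKey": API_KEY})
    with urllib.request.urlopen(request) as response:
        feed = json.load(response)

    # Each datastream carries its own id, units and current value: a stream
    # of data, rather than a single unit of it.
    for stream in feed.get("datastreams", []):
        print(stream.get("id"), stream.get("current_value"))

The interesting part, of course, isn't the handful of HTTP calls; it's the brokerage and metadata around the stream that make it findable and meaningful.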

This is Part 4 of a pre-print draft of a chapter from Smart Things: Ubiquitous Computing User Experience Design, my upcoming book. (Part 1) (Part 2) (Part 3) The final book will be different and this is no substitute for it, but it's a taste of what the book is about.

Citations to references can be found here.

Chapter 1: The Middle of Moore's Law

Part 4: The Need for Design

The ubicomp vision may have existed twenty years ago, but throughout the 90s the complexity of the technology overshadowed nearly all consideration of user experience. The design of embedded systems (as small specific-purpose computers were typically called) was the concern of electrical engineers in R&D departments and universities rather than interaction designers in startups and product groups. Just getting the pieces to interoperate was a kind of victory, never mind whether the resulting product was usable or enjoyable.

The lack of precedent for devices that combined computers with everyday objects meant that the experience design for each new object had to start from scratch. Nearly every product represented a new class of devices, rather than an incremental evolution to an existing known device. The final nail in the coffin of 1990s ubicomp was (unexpectedly) the Web: by the middle of the decade it was a known quantity with known benefits and (presumed) revenue models. There were few incentives for designers, companies and entrepreneurs to risk jumping into another new set of technologies that needed to be first understood, then explained to a consumer market.

Thus, the potential within the technology was relatively unrealized in the mainstream. However, something else was happening at the edges, outside of the main consumer electronics and personal computer worlds. Toy designers, appliance manufacturers, car designers and industrial designers realized that the products they were creating could incorporate information processing technology more deeply. These groups already used computer technology, but did not necessarily consider themselves in the same business as computer manufacturers.

Now, the market is changing and the incentives are shifting. The success of Web services on mobile phones demonstrates that networked products stretch beyond a laptop browser. Intelligent, connected toys show that objects with little processing power can exhibit interesting behaviors with just a little networking. The prices for powerful CPUs have fallen below a threshold where incorporating them becomes a competitively viable business decision. The concept of designing a single general-purpose "computation" device is fading progressively into the same historical background as having a single steam engine to power a whole factory. As it fades, the design challenges grow clearer.

Right now is the time to create a practice of ubiquitous computing user experience design. The technology is ready. Consumers are ready. Manufacturers are ready. The world is ready. Now it's up to designers to define what that practice will mean.

And what of the railroads and time? Time zones, a ubiquitous technology we've come to take for granted, were invented in the 1860s, standardized by the railroads in the 1880s and hotly debated until the 1918 Standard Time Act made them US law (O'Malley, 1990). Once trains ran on schedule, they could save countless lives, create enormous fortunes, displace native peoples, pollute the air and transform the world. Ubiquitous computing is poised to be the next such transformational technology.

Next month: Chapter 3

This is Part 3 of a pre-print draft of a chapter from Smart Things: Ubiquitous Computing User Experience Design, my upcoming book. (Part 1) (Part 2) (Part 4) The final book will be different and this is no substitute for it, but it's a taste of what the book is about.

Citations to references can be found here.

Chapter 1: The Middle of Moore's Law

Part 3: Ubiquitous Computing

As with many other prescient observations and innovations at Xerox PARC (Hiltzik, 2000), researchers there identified in the 1980s that technology was part of accomplishing social action (Suchman, 1987) and that personal computers were "too complex and hard to use; too demanding of attention; too isolating from other people and activities; and too dominating" (Weiser et al, 1999). They coined the term "ubiquitous computing" to describe their program to develop a range of specialized networked information processing devices to address these issues.

[Footnote: It's interesting to speculate about how apparent the implications of following Moore's trend were for the size, shape and use of computers, and about who first thought of multiple computers distributed throughout the environment. Accompanying Moore's original 1965 Electronics magazine article is a cartoon by Grant Compton that shows a salesman hawking a handheld computer alongside stands for "notions" and "cosmetics," with well-dressed men and women crowding around him:

The cartoon's joke is that if Moore's plan is followed, eventually computers will be as small, as common, and sold in the same way as universally consumed personal items. It exaggerates the implications of Moore's article for humor--but perhaps it was funny because it pointed to a hope only implicitly acknowledged.]

Mark Weiser, who headed Xerox PARC's Computer Science Laboratory and later became its Chief Technology Officer, described these ideas in his 1991 Scientific American article, "The Computer for the 21st Century." In that article, he contrasts the potential of ubicomp technology with portable computers and virtual reality, which were then the state of the art in popular thinking about computing:

The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. "Ubiquitous computing" in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box.

[…]

Perhaps most diametrically opposed to our vision is the notion of "virtual reality," which attempts to make a world inside the computer. […] Although it may have its purpose in allowing people to explore realms otherwise inaccessible […] virtual reality is only a map, not a territory. It excludes desks, offices, other people not wearing goggles and body suits, weather, grass, trees, walks, chance encounters and in general the infinite richness of the universe. Virtual reality focuses an enormous apparatus on simulating the world rather than on invisibly enhancing the world that already exists.

[…]

Most of the computers that participate in embodied virtuality will be invisible in fact as well as in metaphor. Already computers in light switches, thermostats, stereos and ovens help to activate the world. These machines and more will be interconnected in a ubiquitous network.

Whether or not he used the semiconductor industry's price trends in his calculations, his title accurately anticipated the market. The year 1991, when Weiser wrote his article, was still the pre-Web era of the i486. The vision he described, of many small, powerful computers of different sizes working simultaneously for one person (or a small group), was simply unaffordable. The processor economics needed to make it commercially viable would not exist until well into the first decade of the 21st century (and, sadly, some years after Weiser's premature death in 1999).

I estimate that the era he envisioned began in 2005. Technologies typically emerge piecemeal at different times, so 2005 is an arbitrary date [Footnote: Ambient Devices' Ambient Orb, for example, came out in 2002.]. But in 2005, Apple put out the first iPod Shuffle, Adidas launched the adidas_1 shoe (Figure 1-1) and iRobot launched the Roomba Discovery robotic vacuum cleaner. None of those products looked like a traditional computer, not least because none had a screen. Moreover, the Shuffle and Discovery were second-generation products, which implies that the first generation's success justified additional investment, and the adidas_1 was deeply embedded in a traditionally non-technological activity (running).

Also, by 2005, a range of industry factors made possible the efficient development of products that roughly fit Weiser's vision of ubiquitous computing. No longer did the elements—the software, the hardware, and the networks—have to be integrated from scratch, often painfully, as they had been throughout the 1990s. Starting around 2000, several factors pointed to an emergence of ubicomp as a commercial agenda:


  • CPU prices had fallen to the point where substantial information processing power had become inexpensive.

  • The Internet had become familiar, with clear social and commercial benefits outside of the scientific and engineering community.

  • A number of standard communication and data exchange protocols had been developed and refined through widespread deployment.

  • Digital telephony was firmly established, and many people were carrying lightweight, network-connected computers in the form of mobile phones.

  • Wireless communication had become common, standardized and successful, with millions of access points deployed throughout the world.

  • Designers had spent the first dotcom boom developing a wide range of interactive products and were now experienced with interaction design for networked services.

Thus, the information processing technology was there, the networks were there and, most importantly, technological familiarity among designers, developers and businesspeople was there. By 2005, the fruits of their efforts were in stores and—after nearly two decades of anticipation—the era of ubiquitous computing had begun.

Tomorrow: Chapter 1, Part 4

This is Part 2 of a pre-print draft of a chapter from Smart Things: Ubiquitous Computing User Experience Design, my upcoming book. (Part 1) (Part 3) (Part 4) The final book will be different and this is no substitute for it, but it's a taste of what the book is about.

Citations to references can be found here.

Chapter 1: The Middle of Moore's Law

Part 2: The Middle of Moore's Law

To understand why ubiquitous computing is particularly relevant today, it's valuable to look closely at an unexpected corollary of Moore's Law. As new information processing technology gets more powerful, older technology gets cheaper without becoming any less powerful.


Figure 1-2. Moore's Law (Based on Moore, 2003)

First articulated by Intel co-founder Gordon Moore, Moore's Law is today usually paraphrased as a prediction that processor transistor densities will double roughly every two years. This graph (Figure 1-2) is traditionally used to demonstrate how powerful the newest computers have become. As a visualization of the density of transistors that can be put on a single integrated circuit, it represents semiconductor manufacturers' way of distilling a complex industry to a single trend. The graph also illustrates a growing industry's internal narrative of progress without revealing how that progress is going to happen.

Moore's insight was dubbed a "Law," like a law of nature, but it does not actually describe the physical properties of semiconductors. Instead, it describes the number of transistors Gordon Moore believed would have to be put on a chip for a semiconductor manufacturer to maintain a healthy profit margin, given the industry trends he had observed over the previous five years. In other words, Moore's 1965 analysis, which is what the Law is based on, was not a utopian vision of the limits of technology. Instead, the paper (Moore, 1965) describes a pragmatic model of factors affecting profitability in semiconductor manufacturing. Moore's conclusion that "by 1975 economics may dictate squeezing as many as 65,000 components on a single silicon chip" is a prediction about how to compete in the semiconductor market. It's more a business plan and a challenge to his colleagues than a scientific result.
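(For a sense of where that 65,000 figure comes from: as I read his 1965 plot, the most cost-effective chips of the day held on the order of 2^6, or about 64, components, and extrapolating ten more annual doublings gives 64 × 2^10 = 65,536, roughly the 65,000 he cites, by 1975.)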

Fortunately for Moore, his model fit the behavior of the semiconductor industry so well that it was adopted as an actual development strategy by most of the other companies in the industry. Intel, which he co-founded soon after writing that article, followed his projection almost as if it were a genuine law of nature, and prospered.


Figure 1-3. CPU Prices 1982–2009 (Data source: Ken Polsson, processortimeline.info)

The economics of this industry-wide strategic decision holds the key to ubiquitous computing's emergence today. During the Information Revolution of the 1980s, 1990s and 2000s, most attention was given to the upper right corner of Moore's graph, the corner that represents the greatest computer power. However, as processors became more powerful, the cost of older technology fell as a secondary effect.


Figure 1-4. Per-transistor cost of CPUs, 1968–2002 (Based on Moore, 2003)

The result of power increasing exponentially while the price of new CPUs remains (fairly) stable (Figure 1-3) is that the cost of older technology drops at (roughly) the same rate as the power of new processors rises (Figure 1-4). Since new technology gets more powerful very quickly, old technology drops in price just as quickly. However, although it gets cheaper, it does not lose any of its ability to process information. Thus, older information processing technology is still really powerful, but now it's (almost) dirt cheap.

[Footnote: This assertion is somewhat of an oversimplification. Semiconductor manufacturing is complex from both the manufacturing and pricing standpoints. For example, once Intel moved on to Pentium IIIs, it's not like there was a Pentium II-making machine sitting in the corner that could be fired up at a whim to make cheap Pentium IIs. What's broadly true, though, is that once Intel converted their chipmaking factories to Pentium III technology, they could still make the functional equivalent of Pentium IIs using it, and (for a variety of reasons) making those chips would be proportionally less expensive than making Pentium IIIs. In addition, these new Pentium II-equivalent chips would likely be physically smaller and use less power than their predecessors.]

Take the Intel i486, released in 1989. The i486 represents a turning point between the pre-Internet PC age of the 1980s and the Internet boom of the 1990s:

  • It ran Microsoft Windows 3.0, the first commercially successful version of Windows, released in 1990.
  • It was the dominant processor when the Mosaic browser catalyzed the Web boom in 1993. Most early Web users probably saw the Web for the first time on a 486 computer.

At the time of its release, it cost $1500 (in 2010 dollars) and could execute 16 million instructions per second (16 MIPS). If we look at 2010 CPUs that can execute 16 MIPS, we find processors like Atmel's ATTiny (Figure 1-5), which sells for about 50 cents in quantity. In other words, broadly speaking, the same amount of processing power that cost $1500 in 1989 now costs 50 cents, uses much less power, and requires much less space.
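As a rough cross-check, those two price points imply a rate of price decline very close to the doubling period usually quoted for Moore's Law. A quick back-of-the-envelope calculation, using only the figures above (a sketch, not a rigorous price index):

    import math

    cost_1989 = 1500.0   # i486 at launch, in 2010 dollars
    cost_2010 = 0.50     # ATTiny in quantity, 2010
    years = 2010 - 1989  # 21 years

    ratio = cost_1989 / cost_2010   # ~3000x cheaper for roughly 16 MIPS
    halvings = math.log2(ratio)     # ~11.6 halvings of price
    print(years / halvings)         # ~1.8 years per halving

A price-halving time of a bit under two years is exactly the cadence that Figures 1-3 and 1-4 describe from opposite directions.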


Figure 1-5. ATTiny Microcontroller, which sells for about 50 cents and has roughly the same amount of computing power as an Intel i486, which initially sold for the equivalent of $1500 (Photo by Uwe Hermann, licensed under Creative Commons Attribution-Share Alike 2.0, found on Flickr)

This is a fundamental change in the price of computation--as fundamental a change as the change in the engineering of a steam boiler. In 1989, computation was expensive and was treated as such: computers were precious and people were lucky to own one. In 2010, it has become a commodity, cheaper than a ballpoint pen. Thus, in the forgotten middle of Moore's Law charts lies a key to the future of the design of all the world's devices: ubiquitous computing.

Tomorrow: Chapter 1, Part 3

This is Part 1 of a pre-print draft of a chapter from Smart Things: Ubiquitous Computing User Experience Design, my upcoming book. (Part 2) (Part 3) (Part 4) The final book will be different and this is no substitute for it, but it's a taste of what the book is about.

Citations to references can be found here.

Chapter 1: The Middle of Moore's Law

Part 1

The history of technology is a history of unintended consequences, of revolutions that never happened, and of unforeseen disruptions. Take railroads. In addition to quickly moving things and people around, railroads brought a profound philosophical crisis of timekeeping. Before railroads, clock time followed the sun. “Noon” was when the sun was directly above, and local clock time was approximate. This was accurate enough for travel on horseback or foot, but setting clocks by the sun proved insufficient to synchronize railroad schedules. One town's noon would be a neighboring town's 12:02, and a distant town's 12:36. Trains traveled fast enough that these small differences added up. Arrival times now had to be determined not just by the time it took to travel between two places, but also by the local time at the point of departure, which could be based on an inaccurate church clock set with a sundial. The effect was that trains would run at unpredictable times and, with terrifying regularity, crash into each other.

It was not surprising that railroads wanted a consistent way to measure time, but what did "consistent" mean? Their attempt to answer this question led to a crisis of timekeeping: do the railroads dictate when noon is, does the government, or does Nature? What does it mean to have the same time in different places? Do people in cities need a different timekeeping method than farmers? The engineers making small steam engines in the early 19th century couldn't possibly have predicted that by the end of the century their invention would lead to a revolution in commerce, politics, geography and pretty much all human endeavors.

[Footnote: See Chapter 2 of O'Malley (1990) for a detailed history of the effect of railroads on timekeeping in America.]


Figure 1-1. The adidas_1 shoe, with embedded microcontroller and control buttons (Courtesy Adidas)

We can compare the last twenty years of computer and networking technology to the earliest days of steam power. Once, giant steam engines ran textile mills and pumped water between canal locks. Miniaturized and made more efficient, steam engines became more widely dispersed throughout industrial countries: powering trains, machines in workplaces, even personal carriages. As computers shrink, they too are being integrated into more places and contexts than ever before.

We are at the beginning of the era of computation and data communication embedded in, and distributed through, our entire environment. Going far beyond how we now define "computers," the vision of ubiquitous computing is of information processing and networking as key components in the design of everyday objects (Figure 1-1), using built-in computation and communication to make familiar tools and environments do their jobs better. It is the underlying (if unstated) principle guiding the development of toys that talk back, clothes that react to the environment, rooms that change shape depending on what their occupants are doing, electromechanical prosthetics that automatically manage chronic diseases and enhance people's capabilities beyond what's biologically possible, hand tools that dynamically adapt to their user, and (of course) many new ways for people to be bad to each other.

[Footnote: This book will not discuss military ubiquitous computing, although that is certainly a major focus of development. The implication of computers embedded into weapons and surveillance devices has been discussed for as long as ubicomp (DeLanda, 1991), if not longer.]

The rest of this chapter will discuss why the idea of ubiquitous computing is important now, and why user experience design is key to creating successful ubicomp devices and environments.

Sidebar: The Many Names of Ubicomp

There are many different terms that have been applied to what I am calling ubiquitous computing (or ubicomp for short). Each term came from a different social and historical context. Although not designed to be complementary, each built on the definitions of those that came before (if only to help the group coining the term identify itself). I consider them to be different aspects of the same phenomenon:
  • Ubiquitous computing refers to the practice of embedding information processing and network communication into everyday, human environments to continuously provide services, information and communication.
  • Physical computing describes how people will interact with computing through physical objects, rather than through an online environment or a monolithic, general-purpose computer.
  • Pervasive computing refers to the prevalence of this new mode of digital technology.
  • Ambient intelligence describes how these devices will appear to integrate algorithmic reasoning—"intelligence"—into human-built spaces so that it becomes part of the atmosphere—the "ambiance"—of the environment.
  • The Internet of Things suggests a world in which digitally identifiable physical objects relate to each other in a way that is analogous to how purely digital information is organized on the Internet (specifically, the Web).
Of course, applying such retroactive continuity (a term the comic book industry uses to describe the pretense of order grafted onto a disorderly existing narrative) attempts to add structure to something that never had one. In the end I believe that all of these terms actually reference the same general idea. I prefer to use ubiquitous computing since it is the oldest.

Tomorrow: Chapter 1, Part 2



