What is physical computing

From Knowledge Kitchen


The term “physical computing” was coined by NYU ITP professor Dan O’Sullivan in 1991 to describe his use of hardware hacking to extend the physical inputs and outputs of the then-increasingly popular personal computer. Let's break down the term “physical computing” to understand why it is applied to this field, and how it accurately describes the tools and practices therein.


Computing - what is it?

"Computing is, broadly, a term describing any goal-oriented activity requiring, benefiting from, or creating computers." -Wikipedia

Okay, so what's a computer?

"People don't have buttons and a computer does" - Some little kid from PS-272 in Brooklyn in the 1980s

And "buttons" are?

"A button is a switch. [... It] can only be activated by being pressed [...]" -Minecraft Wiki

What does it mean to be physical?

"having material existence" or "of or relating to the body" -Merriam-Webster

Fine. So then what is Physical Computing?

"A lot of beginning computer interface design instruction takes the computer hardware for granted, namely, that there is a keyboard, a screen, perhaps speakers, and a mouse [...] In physical computing, we take the human body as a given, and attempt to design within the limits of its expression."

-Tom Igoe, Professor at NYU ITP, and co-author, with Dan O’Sullivan, of Physical Computing: Sensing and Controlling the Physical World with Computers

What does it mean to be human?

"Oh geez. That's a topic for another course. But, by the time the department agrees to let me teach it, the answer might have changed." -Anonymous Public Intellectual

Sure. But, so, Physical Computing only deals with designing interfaces for humans?

"No. With due respect, let’s forget what Tom Igoe said. Physical computing is any goal-oriented activity requiring, benefiting from, or creating interactions between a body having material existence and a body capable of doing computations." -Your Instructor

In practice

In actuality, physical computing is nothing more than a set of common tools and practices that enable artists, designers, and hobbyists to create electro-mechanical works that can sense aspects of the physical world and effect changes to it in ways beyond those offered by the standard desktop, laptop, or mobile computer.

Physical computing practitioners today use electronic components and physical materials to build devices that interact in some way with the physical world. These practices draw heavily on prior work done in robotics, which uses electronic sensors to detect physical input, microcontrollers to reformat and process that input, and actuators to effect changes in the physical world in response to that input.
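The sense-process-actuate loop described above can be sketched in plain Python. This is a hypothetical simulation, not a real device: the sensor readings and the threshold are made up, and an actual build would run similar logic on a microcontroller wired to a real sensor and actuator.

```python
# Hypothetical sketch of the sense -> process -> actuate loop:
# a light sensor feeds a microcontroller, which decides whether
# to switch a lamp (the actuator) on.

def process(light_level, threshold=300):
    """Microcontroller logic: turn the lamp on when it gets dark."""
    return light_level < threshold

# Simulated sensor readings as the room darkens over time.
sensor_readings = [720, 540, 310, 180, 90]

# The actuator state computed for each reading.
actuator_states = [process(r) for r in sensor_readings]
# The lamp switches on once the light level drops below the threshold.
```

The same three-stage structure (sense, process, actuate) underlies most physical computing projects; only the sensors, the decision logic, and the actuators change.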

Robotics and physical computing practitioners are often faced with a similar challenge: to build custom computing devices that sense and engage the physical world in an affordable way. And physical computing, like robotics, can often involve a compendium of basic skills from fields such as mechanical engineering, electrical engineering, industrial design, and computer science.

But the goal of physical computing, unlike robotics, is usually not to design human intelligence into electronic systems.

Science and technology are in a constant state of flux

Technological and scientific developments are constantly changing our perspectives and raising interesting questions about the future of computing and what it means to be human.

Contemporary culture and law are not always synchronized with the moral, ethical, and biological implications of the latest research. Some simple examples:

These and many other developments can easily lead you to believe that the nature of computing and of human physicality are in constant flux.

Obviously, future changes in human physicality and computation will have an impact on how artists, designers, and hobbyists design interactions between people and computers. So the practice of physical computing will adapt as emerging science and technology become cheaper, safer, and more commercially available.

What advantages do computers offer to interaction design?

Complexity of interaction

You can hook up a switch directly to a light bulb without a computer. Flipping the switch turns on the light bulb. You probably still have lights like this at home.

With a computer stuck between the switch and the light bulb, you can make the relationship between flipping the switch and turning on the light bulb more complex by programming the computer. For example, the light bulb can turn on only every 5th time the switch is pressed, or only on Mondays, or dimly during the day and brightly at night. In other words, you can design the interaction between light bulb and switch any way you want, as long as you know how to program what you want.
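The "every 5th press" example can be simulated in a few lines of Python. This is a hypothetical sketch: the class name and behavior are invented for illustration, and a real build would read a physical switch through a microcontroller rather than a method call.

```python
# Minimal simulation of a programmable switch-to-bulb relationship:
# the "computer" between the switch and the bulb toggles the bulb
# only on every Nth press.

class SmartLamp:
    def __init__(self, presses_per_toggle=5):
        self.presses_per_toggle = presses_per_toggle
        self.press_count = 0
        self.is_on = False

    def press_switch(self):
        """Count presses; toggle the bulb only on every Nth press."""
        self.press_count += 1
        if self.press_count % self.presses_per_toggle == 0:
            self.is_on = not self.is_on
        return self.is_on

lamp = SmartLamp()
states = [lamp.press_switch() for _ in range(10)]
# The bulb turns on at the 5th press and back off at the 10th.
```

Swapping in a different rule (Mondays only, dim by day) just means changing the body of `press_switch`; the wiring stays the same.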

Linear vs. non-linear media

You can store sound and images on film or magnetic tape without a computer. When you play the film or tape, it generally plays from start to finish in the same chronological order as the original recording, unless you chop it up and tape it back together.

With a computer and random access memory, you can store sound and images as discrete samples and play them in any direction and any order you want, non-destructively. In other words, you can use existing programs or write your own programs to help you reorder the sounds or images however you want, whenever you want.
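A toy Python example makes the non-linear, non-destructive point concrete. The sample names here are placeholders standing in for real audio or image data.

```python
# With samples in random-access memory, playback order is just
# indexing, and the original recording is never altered.

recording = ["s0", "s1", "s2", "s3", "s4"]  # discrete samples

def play(samples, order):
    """Return the samples in any order, without modifying them."""
    return [samples[i] for i in order]

forward  = play(recording, range(len(recording)))
backward = play(recording, reversed(range(len(recording))))
shuffled = play(recording, [2, 0, 4, 1, 3])
# `recording` itself is unchanged after every playback.
```

Contrast this with tape or film, where playing backward or reordering means physically cutting and splicing the original.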

How we sense and act on the world around us

This homunculus creatively shows how much of your brain's cortex is devoted to each of the sensory and motor abilities of your body. This gives you an idea of the relative importance of each part of your body in sensing and acting on the world around you, i.e. your sensory and motor cortical maps.

[Image: cortical homunculus]

Source: Wikipedia

How a standard computer thinks we sense the world

How we see ourselves is fundamentally different from how we have designed our standard computers to see us. Contemporary computers are designed as if we only had a finger, an eye, and ears. All you can do is look at a 2-dimensional screen, click or touch a mouse or touchscreen, and hear low-quality sound from low-fidelity speakers. The following homunculus, from Tom Igoe and Dan O'Sullivan's book, Physical Computing, shows how the standard computer sees our sensorimotor capacities.

[Image: computer homunculus]

Source: Physical Computing: Sensing and Controlling the Physical World with Computers

If we could design computers to better match our own perceptual and motor abilities, we would have far more expressive computational devices.

