
"With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us." Zuboff, 2019

more #WIP from The User Condition >>>

We call "user" the person who operates a computer. But is "use" the most fitting category to describe such an activity? Pretty generic, isn't it? New media theorist Lev Manovich briefly argued that "user" is just a convenient term to indicate someone who can be considered, depending on the specific occasion, a player, a gamer, a musician, etc. This terminological variety derives from the fact, originally stated by computer pioneer Alan Kay, that the computer is a *metamedium*, namely, a medium capable of simulate all other media. What else can we say about the user? In *The Interface Effect* Alexander Galloway points out *en passant* that one of the main software dichotomies is that of the user versus the programmer, the latter being the one who acts and the former being the one who's acted upon. For Olia Lialina, the user condition is a reminder of the presence of a system programmed by someone else. Benjamin Bratton clarifies: "in practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity […] the User is both an initiator and an outcome." (1/2)

Paul Dourish and Christine Satchell recognize that the user is a discursive formation aimed at articulating the relationship between humans and machines. However, they consider it too narrow, as interaction does not only include forms of use, but also forms of *non-use*, such as withdrawal, disinterest, boycott, resistance, etc. With our definition of agency in mind (the ability to interrupt behavior and break automatisms), we might come to a surprising conclusion: within a certain system, the non-user is the one who possesses maximum agency, more than the standard user, the power user, and maybe even more than the hacker. To a certain extent, this shouldn't disconcert us too much, as the ability to refuse often coincides with power. The very possibility of breaking a behavior, or of not acquiring it in the first place, often betrays a certain privilege. We can think, for instance, of Big Tech CEOs who fill their kids' agendas with activities to keep them away from social media. (2/2)

more #WIP from The User Condition >

In her essay, Olia Lialina points out that the user preexisted computers as we understand them today. The user existed in the minds of people imagining what computational machines would look like and how they would relate to humans. These people were already consciously dealing with issues of agency, action and behavior. One distinction that maps onto the notions of action and behavior is that between creative and repetitive thought, the latter being prone to mechanization. Such a distinction can be traced back to Vannevar Bush.

In 1960, J. C. R. Licklider, anticipating one of the cores of Ivan Illich's critique, noticed how often automation meant that people would be there to help the machine rather than be helped by it. A bureaucratic "substitution of ends" would take place. In fact, automation was and is often semi-automation, thus falling short of its goal. This semi-automated scenario merely produces a "mechanically extended man". The opposite model is what Licklider called "Man-Computer Symbiosis", a truly "cooperative interaction between men and electronic computers". The Mechanically Extended Man is a behaviorist model because decisions, which precede actions, are taken by the machine. Man-Computer Symbiosis is a bit more complicated: agency seems to reside in the evolving feedback loop between user and computer. Man-Computer Symbiosis would "enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs". Behavior, understood here as clerical, routinizable work, would be left to computers, while creative activity, which implies various levels of decision making, would be the domain of both.

Alan Kay's pioneering work on interfaces was guided by the idea that the computer should be a medium rather than a vehicle, its function not pre-established (like that of the car or the television) but reformulable by the user (as in the case of paper and clay). For Kay, the computer had to be a general-purpose device. He also elaborated a notion of computer literacy which would include the ability to read the contents of a medium (the tools and materials generated by others) but also the ability to write in a medium. Writing in the computer medium would not only include the production of materials, but also of tools. That, for Kay, is authentic computer literacy: "In print writing, the tools you generate are rhetorical; they demonstrate and convince. In computer writing, the tools you generate are processes; they simulate and *decide*."

More recently, Shan Carter and Michael Nielsen introduced the concept of "artificial intelligence augmentation", namely, the use of AI systems to augment intelligence. Instead of limiting the use of AI to "cognitive outsourcing" (_AI as an oracle, able to solve some large class of problems with better-than-human performance_), AI would be a tool for "cognitive transformation" (_changing the operations and representations we use to think_).

Through the decades, user agency has meant freedom from predetermined behavior, the ability to program the machine instead of being programmed by it, decision making, cooperation, a break from repetition, functional autonomy. These values, and the concerns deriving from their limitation, have been present since the inception of the science that propelled the development of computers. One of the most pressing fears of Norbert Wiener, the founding father of cybernetics, was fascism. With this word he didn't refer to the charismatic type of power in place during historical dictatorships. He meant something more subtle and encompassing. For Wiener, fascism meant "the inhuman use of human beings", a predetermined world, a world without choice, a world without agency. Here's how he described it in 1950:

> In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

epiphany of the day: when we make a drawing in Illustrator, we are writing with the Illustrator vehicle, but we are merely *reading* the computer medium

#WIP from The User Condition essay 

In the '80s, Apple came up with a cheery, Coca-Cola-like [ad](youtube.com/watch?v=JLXjfhtgtf) showing people of all ages from all around the world using their machine for the most diverse purposes. The commercial ended with a promising slogan: "the most personal computer". A few decades later, Alan Kay, who was among the first to envision computers as personal devices<!-- history-computer.com/Library/K -->, was not impressed with the state of computers in general, and with those of Apple in particular.

For Kay, a truly personal computer would encourage full read-write literacy. Through the decades, however, Apple seemed to go in a different direction: cultivating an allure around computers as lifestyle accessories, like a pair of sneakers. In a sense, it fulfilled the consumers' urge to individualize themselves more than any other company did. Let's not, though, look down on the accessory value of a device and the sense of belonging it creates. It should be enough to go to any hackerspace to recognize a similar logic, but with a Lenovo (or more recently a Dell) in place of a Mac.

#WIP from The User Condition essay 

And yet, Apple's computer-as-accessory actively reduced read-write literacy. Apple placed creativity and "genius" at the surface of preconfigured software. Using Kay's terminology, Apple's creativity was relegated to the production of materials: a song composed in GarageBand, a funny effect applied to a selfie with Photo Booth. What kind of computer literacy is this? Counterintuitively, what is a form of writing within a software *vehicle* is often a form of reading the computer *medium*. We only write the computer medium when we generate not simply materials, but tools. A term coined by Robert Pfaller in a different context seems to fit here: *interpassivity*. Don't get me wrong, not all medium writing needs to happen on an old-style terminal, without the aid of a graphical interface. Writing the computer medium is also designing a macro in Excel or assembling an animation in Scratch.

#WIP from The User Condition essay 

Then the new millennium came, and mobile devices with it. At this point, the gap between reading and writing grew dramatically. In 2007 the iPhone was released. In 2010, the iPad was launched. Its main features didn't just have to do with not writing the computer medium, but with not writing at all: among them, browsing the web, watching videos, listening to music, playing games, reading ebooks. The hard keyboard, the "way to escape pre-programmed paths" according to Dragan Espenschied, disappeared from smartphones. Devices had to be jailbroken. Software was compartmentalized into apps. Screens became small and interfaces lost their complexities to fit them. A "rule of thumb" was established. Paraphrasing Kay again, simple things didn't stay simple, and complex things became less possible.

#WIP from The User Condition essay 

Google Images confirms that we are still anchored to a pre-mobile idea of computers, a sort of skeuomorphism of the imagination. We think of desktops or, at most, laptops. Instead, we should think of smartphones. In 2013, Michael J. Saylor [started noticing a shift](books.google.nl/books?id=8P4lD): "currently people ask, 'Why do I need a tablet computer or an app-phone \[that's what he calls a smartphone\] to access the Internet if I already own a much more powerful laptop computer?' Before long the question will be, 'Why do I need a laptop computer if I have a mobile computer that I use in every aspect of my daily life?'" If we are to believe [CNBC](cnbc.com/2019/01/24/smartphone), he was right. A recent headline of theirs read: "Nearly three quarters of the world will use just their smartphones to access the internet by 2025". Right now, a person in the US is more likely to possess a mobile phone than a desktop computer (81% vs 74% according to the [Pew Research Center](pewresearch.org/internet/fact-)), and I suspect that globally the discrepancy is higher. A person's first encounter with a computer will soon be with a tablet or mobile phone rather than with a PC. And it's not just kids: the first computer my aunt used in her life is her smartphone. The PC world is turning into a mobile-first world.

#WIP from The User Condition essay 

There is, finally, another way in which the personal in personal computers has mutated. The personal became personalized. In the past, the personal involved not just the possession of a device, but also one's own know-how, a *savoir faire* that a user developed for themselves. A basic example: organizing one's music collection. That rich, intricate system of directories and filenames each of us individually came up with. Such know-how, big or small, is what allows us to build, or more frequently rebuild, our little shelter within a computer. Our *home*. When the personal becomes personalized, the knowledge of the user's preferences and behavior is first registered by the system, and then made alien to the user themselves. There is one music collection and it's called iTunes, Spotify, YouTube Music. Oh, and it's also a mall where advertising forms the elevator music. Deprived of their savoir faire, the users receive an experience tailored to them, but they don't know exactly how. Why *exactly* is our social media timeline ordered the way it is? We don't know, but we know it's based on our prior behavior. Why exactly does the autosuggest present us with that very word? We assume it's a combination of factors, but we don't know which ones.

#WIP from The User Condition essay 

Let's call it impersonal computing (or … <!-- footnote -->), shall we? Its features: computer accessorization at the expense of an authentic computer literacy, mobile-first asphyxia, dispossession of an intimate know-how. A know-how that, we must admit, is never fully annihilated; tactical techniques emerge in the cracks: small hacks, bugs that become features, eclectic workflows. The everyday life of Lialina's Turing Complete User is still rich. That said, we can't ignore the trend. In an age in which people are urged to "learn to code" for economic survival, computers are commonly used less as a medium than as a vehicle. The utopia of a classless computer world turned out to be exactly that, a utopia. There are users and there are coders.

"There are endless possibilities as to what a #website could be. What kind of room is a website? Or is a website more like a house? A boat? A cloud? A garden? A puddle? Whatever it is, there’s potential for a self-reflexive feedback loop: when you put energy into a website, in turn the website helps form your own identity." - Laurel Schwulst

thecreativeindependent.com/peo

"Google’s ideal society is a population of distant users, not a citizenry" Zuboff 2019

#theusercondition #wip

An all-inclusive computer literacy for the many was never a simple achievement. Alan Kay recognized this himself:

> The burden of system design and specification is transferred to the user. This approach will only work if we do a very careful and comprehensive job of providing a general medium of communication which will allow ordinary users to casually and easily describe their desires for a specific tool.

Maybe not many users felt like taking on such a burden. Maybe it was simply too heavy. Or maybe, at a certain moment, the burden started to *look* heavier than it was. Users' desires weren't expressed by the users themselves with the computer medium. Instead, they were defined *a priori* within the controlled setting of interaction design: theoretical user journeys anticipated and construed user activity. In the name of user-friendliness, many learning curves were flattened.

Maybe that was the users' true desire all along. Or at least, this is what computer scientist and entrepreneur Paul Graham thinks. In 2001, he recounted: "[…] near my house there is a car with a bumper sticker that reads 'death before inconvenience.' Most people, most of the time, will take whatever choice requires least work." He continues:

> When you own a desktop computer, you end up learning a lot more than you wanted to know about what's happening inside it. But more than half the households in the US own one. My mother has a computer that she uses for email and for keeping accounts. About a year ago she was alarmed to receive a letter from Apple, offering her a discount on a new version of the operating system. There's something wrong when a sixty-five year old woman who wants to use a computer for email and accounts has to think about installing new operating systems. Ordinary users shouldn't even know the words "operating system," much less "device driver" or "patch."

So who should know these words? "The kind of people who are good at that kind of thing," says Graham. His vision seems antipodal to Kay's. Given a certain ageism permeating the quote, one is tempted to root for Kay without hesitation, and to frame Graham (who's currently 56) as someone who wants to prevent the informatic emancipation of his mother. But is that really the case? The answer depends on the cultural status we attribute to computers and the notion of autonomy we adopt.

We might say, with Graham, that his mother is made less autonomous by some technical requirements she neither needs nor wants to deal with. For her, having a computer that functions like a slightly smarter toaster is good enough. Most of the computer's technical complexity, together with its technical possibilities, is alien to her: a waste of time, a source of worry, a burden. Moreover, in order to continue using her machine, she might be forced to familiarize herself with a new operating system.

On the other hand, we might say, keeping in mind Kay's vision, that the autonomy of Graham's mother was eroded "upstream", as she has been using the computer as an impersonal vehicle, unaware of its profound possibilities. If you believe that society at large is at a loss by using the computer as a smart toaster, then you're with Kay. If you think that is fair, then you're with Graham. But are these two views really in opposition?

Let's consider an actual smart toaster, one of those Internet-of-Things devices. Your smart toaster knows the bread you want to toast and the time that it takes. But, one day, out of the blue, you can't toast your bread because you haven't updated the firmware. You couldn't care less about the firmware: you're starving. But you learn about it and update the device. Then, the smart toaster no longer works as it used to: settings and features have changed. What we witness here is a reduction of agency, as you can't interrupt the machine's update behavior. Instead, you have to modify your behavior to adapt to it. Back to Graham's mother: the know-how she laboriously acquired, the desire for a specific tool that she casually developed through time, might be suddenly wiped out by a change she never asked for.

Alan Kay has a motto: "simple things should be simple, complex things should be possible". Above, we focused on complex things becoming less possible. But what about simple things? Often, they don't stay simple either. True, without read-write computer literacy a user is stuck in somewhat predetermined patterns of behavior, but the personal adoption of these patterns often forms a know-how. If that is the case, being able to stick to them can be seen as a form of agency. Interruption of behavior means aborting the update. :workstation:

#wip #theusercondition

The revolution of behavioral patterns is often sold in terms of convenience, namely, less work. Less work means fewer decisions to make. Those decisions are not magically disappearing, but are simply delegated to an external entity that takes them automatically. In fact, we can define convenience as automated know-how or automated decision-making. We shouldn't consider this delegation of choice as something intrinsically bad, otherwise we would end up condemning the computer for its main feature: programmability. Instead, we should distinguish between two types of convenience: autonomous convenience and heteronomous convenience. In the former, the knowledge necessary to take the decision is accessible and modifiable. In the latter, such knowledge is opaque.

Let's consider two ways of producing a curated feed. The first one involves RSS, a standardized, computer-readable format to gather various content sources. The user manually collects the feeds they want to follow in a list that remains accessible and transformable. The display criterion is generally chronological. Thus, an RSS feed incorporates the user's knowledge of the sources and automatizes the know-how of going through the blogs individually. Indeed, less work. In this case, it is fair to speak of autonomous convenience.
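As a concrete illustration, here is a minimal sketch of such an autonomously curated feed, assuming Python and the third-party feedparser library; the feed URLs are placeholders. The point is that both the list of sources and the ordering rule stay fully readable and editable by the user.

```python
# A minimal sketch of autonomous convenience: a chronological feed
# assembled from a user-maintained list of sources.
# Assumes the third-party feedparser library (pip install feedparser);
# the URLs below are placeholders.
import calendar
import feedparser

FEEDS = [
    "https://example.org/blog/rss.xml",
    "https://example.net/notes/feed.atom",
]

def entry_timestamp(entry):
    """Return a sortable timestamp, falling back to 0 if a date is missing."""
    parsed_date = entry.get("published_parsed") or entry.get("updated_parsed")
    return calendar.timegm(parsed_date) if parsed_date else 0

def build_timeline(feed_urls):
    """Merge several RSS/Atom feeds into a single list, newest first."""
    entries = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            entries.append({
                "source": feed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "timestamp": entry_timestamp(entry),
            })
    # The display criterion is explicit and modifiable: chronological order.
    return sorted(entries, key=lambda e: e["timestamp"], reverse=True)

if __name__ == "__main__":
    for item in build_timeline(FEEDS)[:20]:
        print(f"[{item['source']}] {item['title']} {item['link']}")
```

Editing FEEDS or swapping the sorting key is all it takes to change what the timeline shows: the know-how remains in the user's hands.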

The Twitter feed works differently. The displayed content doesn't only reflect the list of contacts that the user follows, but includes ads, replies, etc. The display criterion is "algorithmic", that is, based on factors unknown to the user and only very partially manipulable by them. This is a case of heteronomous convenience. While the former is agential, since the user can fully influence its workings, the latter is behavioral, because the user can't.

Broadly, algorithmic feeds have mostly wiped out the RSS feed savoir faire, overriding autonomous ways of use. The Overton window of complexity was thus reduced. Today, a new user is thrown into a world where the algorithmic feed is the default, while the old user has to struggle more to maintain their RSS know-how. The expert is burdened with exercising their expertise, while the neophyte is not even aware of the possibility of such expertise.

Blogs stop serving RSS, feed readers aren't maintained, etc. It is no coincidence that Google discontinued its Reader product with the following message on its page: "We understand you may not agree with this decision, but we hope you'll come to love these alternatives as much as you loved Reader." In fact, Google has been simplifying web activities all along. Cory Arcangel in 2009:

> After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self-publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0.

True, alternatives do exist, but they become more and more fringe. Graham seems to be right when he says that most users will go for less work. Generally, heteronomous convenience means less work than autonomous convenience, as the maximum number of decisions is taken by the system in place of the user. Furthermore, heteronomous convenience dramatically influences the perception of the work required by autonomous convenience. Nowadays, the process of collecting RSS feed URLs *appears* tragically tedious if compared to Twitter's seamless "suggestions for you". :workstation:

"The answer to the question Who knows? was that the machine knows, along with an elite cadre able to wield the analytic tools to troubleshoot and extract value from information." - Zuboff, 2019

On Twitter, we can experience the dark undertones of heteronomous convenience. User Tony Arcieri [developed](twitter.com/bascule/status/130) a worrisome experiment about the automatic selection of a focal point for image previews, which often show only a part of the image when tweeted. Arcieri uploaded two versions of a long, vertical image. In one, a portrait of Obama was placed at the top and one of Mitch McConnell at the bottom. In the second image, the positioning was reversed. In both cases the focal point chosen for the preview was McConnell's face. Who knows! The system spares the user the time to make such a choice autonomously, but its logic is obscure and immutable. Here, convenience is heteronomous.

Does it have to be this way? Not necessarily. Mastodon is an open source, self-hosted social network that at first glance looks like Twitter, but is profoundly different. One of the many differences (which I'd love to describe in detail, but that would be out of the scope of this text, srry) has to do with focal point selection. Here, the user has the option to choose it autonomously, which means manually. They can also avoid making any decision. In that case, the preview will show the middle of the image by default. :workstation:

(thx @joak for pointing me to this case!)
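For the technically curious, here is a minimal sketch of what choosing a focal point manually can look like through Mastodon's media API, which accepts a focus parameter expressed as two floats between -1.0 and 1.0 (0,0 being the centre of the image). This is a sketch assuming the Python requests library; the instance URL and access token are placeholders.

```python
# A minimal sketch of autonomous focal point selection on Mastodon,
# assuming the requests library and Mastodon's media API, which accepts
# a "focus" parameter as "x,y" floats in the range -1.0..1.0.
# The instance URL and access token below are placeholders.
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance
TOKEN = "YOUR_ACCESS_TOKEN"            # placeholder token

def upload_with_focus(image_path, x=0.0, y=0.0, description=""):
    """Upload an image and declare, manually, where its focal point lies."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            f"{INSTANCE}/api/v1/media",
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"file": image_file},
            data={"focus": f"{x},{y}", "description": description},
        )
    response.raise_for_status()
    return response.json()  # contains the media id to attach to a status

# For a tall image whose important part sits near the top:
# media = upload_with_focus("two_portraits.png", x=0.0, y=0.8)
```

The decision remains optional, but it is the user's to take: the logic is neither obscure nor immutable.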

“The danger that the computer poses is to human autonomy. The more that is known about a person, the easier it is to control him. Insuring the liberty that nourishes democracy requires a structuring of societal use of information and even permitting some concealment of information.” Schwartz, 1989

#theusercondition #wip

Heteronomous convenience is an automated know-how, a savoir faire turned into a silent procedure, a set of decisions taken in advance for the user. Often, this type of convenience goes hand in hand with the removal of friction, that is, of laborious decisions that consciously interrupt behavior. Let's consider a paginated set of items, like the results of a Google Search or DuckDuckGo query. In this context, users have to consciously click on a button to go to the next page of results. That is a minimal form of action, and thus of friction. Infinite scroll, the interaction technique employed by, for instance, Google Images or Reddit, removes such friction. The mindful action of going through pages is turned into a homogeneous, seamless behavior.
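To caricature the difference in code: below is a toy sketch (the fetch_page helper is hypothetical, standing in for any paginated source) in which the paginated version demands an explicit decision before each new batch, while the "infinite" version keeps feeding items without ever asking.

```python
# A toy sketch contrasting pagination (explicit action, i.e. friction)
# with infinite scroll (automatic continuation, i.e. seamless behavior).
# fetch_page is hypothetical: it stands in for any paginated data source.

def fetch_page(page_number, per_page=10):
    """Pretend backend: return a batch of items for the given page."""
    start = page_number * per_page
    return [f"item {start + offset}" for offset in range(per_page)]

def paginated_browsing(max_pages=5):
    """The user must consciously ask for each new page: a moment of friction."""
    for page_number in range(max_pages):
        for item in fetch_page(page_number):
            print(item)
        if input("Next page? [y/N] ").strip().lower() != "y":
            break  # the interruption of behavior: an act of agency

def infinite_scroll(max_pages=5):
    """The next batch arrives as soon as the previous one is consumed."""
    for page_number in range(max_pages):
        for item in fetch_page(page_number):
            print(item)
        # no decision point here: the flow continues on its own

if __name__ == "__main__":
    paginated_browsing()
```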

#theusercondition #wip

And yet, this type of interaction seems somehow old-fashioned. Manually scrolling an infinite webpage feels imperfect, accidental, temporary if not already antiquated, even weird, one could say: it's a mechanical gesture fitting the list's needs<!-- clarify -->. It's like turning a crank to listen to a radio. It's an automatism that hasn't yet been automatized. This automatism doesn't produce an event (such as clicking on a link) but modulates a rhythm: it's analog instead of digital. In fact, elsewhere it has already been automatized. Think of YouTube playlists, which play automatically, or Instagram stories (a model originated in Snapchat that spread to Facebook and Twitter), where the behavior is reversed: the user doesn't power the engine, but instead stops it from time to time. In the playlist mode, "active interaction" is an exception.

#theusercondition #wip

We see here a progression that is analogous to that of the Industrial Revolution: first, some tasks are simply unrelated to one another (hyperlinks and pagination, pre-industrial); then they are organized to require manual and mechanical labor (infinite scroll, industrial); finally, they are fully automated and only require supervision (stories and playlists, smart factory). Pagination, infinite scroll, playlist. Manual, semi-automated, fully automated. Click, scroll, pause.

The late French philosopher Bernard Stiegler focused on the notion of proletarianization: according to him, a proletarian is not just robbed of the form and the products of their labor, but especially of their know-how.<!-- verify --> Users are deprived of the rich, idiosyncratic fullness of their gestures. These gestures are then reconfigured to fit the system's logic before being made completely useless. The gesture is first standardized and then automated. The mindless act of scrolling is analogous to the repetitive operation of assembling parts of a product in a factory. Whereas the worker doesn't leave their position, the user doesn't leave the page. Both feature movement without relocation. Furthermore, in the factory, machines are organized according to an industrial know-how that makes the factory itself the only entity that fully understands the functional relationships between its parts. What do we call a computational system organized like such a factory? We can call it a platform and define it as a system that extracts and standardizes user decisions before rendering them unintelligible and immutable. In the platform, opaque algorithms embody the logic that arranges data into lists that are then fed to the user. The platform-factory is smart and dynamic, the user-worker is made dumb and static. :workstation:

"The most profound technologies are those that disappear. They
weave themselves into the fabric of everyday life until they are indistinguishable
from it.” Wieser 1991

"The ENIAC itself, strangely, was a very personal computer. Now we think of a personal computer as one you carry around with you. The ENIAC was actually one that you kind of lived inside". Harry Reed quoted in New Dark Age by James Bridle

"'The new power is action,' a senior software engineer told me. 'The intelligence of the internet of things means that sensors can also be actuators.' The director of software engineering for a company that is an important player in the 'internet of things' added, 'It's no longer simply about ubiquitous computing. Now the real aim is ubiquitous intervention, action, and control. The real power is that now you can modify real-time actions in the real world. Connected smart sensors can register and analyze any kind of behavior and then actually figure out how to change it. Real-time analytics translate into real-time action.' The scientists and engineers I interviewed call this new capability 'actuation,' and they describe it as the critical though largely undiscussed turning point in the evolution of the apparatus of ubiquity." Zuboff, 2019


@entreprecariat all described in detached scientific/mechanistic remote/far-off context. i was enthralled with ubiquitous & pervasive computing growing up, but it always meant empowering people, meant giving them control & actions, the machines about us bending a knee in service to let whomever was around temporarily take over. "hyper-personal" computing, and also hyper-situational computing, the user's will stitching itself into the various systems around us.

it's not about analytics & driving change. it's about enhancing & augmenting human agency, giving us wider sets of actions, that span bigger pieces of the world.
