Monthly Archives: September 2009

Net Neutrality – Is all data created equal?

[Image: net neutrality]

In principle, the bits of information required to display this blog should reach you as fast as any other information accessed on the Internet. They shouldn’t have to wait in line while your Skype call is coming through, and they also shouldn’t be privileged over, let’s say, other (not so interesting) blogs. That’s what they call “net neutrality”.
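Just to make the idea concrete, here is a toy Python sketch of the difference between a neutral, first-come-first-served network and one where the operator ranks traffic types. The packet list, the traffic labels, and the priority table are all invented for illustration and have nothing to do with how real routers are actually configured.

```python
import heapq
from collections import deque

# Toy "packets": (traffic_type, payload). All names are invented for illustration.
packets = [("blog", "this post"), ("voip", "Skype frame"),
           ("blog", "another post"), ("video", "cat clip")]

# A neutral network: first come, first served, regardless of what the data is.
neutral_queue = deque(packets)
print([neutral_queue.popleft()[1] for _ in packets])

# A non-neutral network: an operator-defined ranking lets some traffic jump the line.
priority = {"voip": 0, "video": 1, "blog": 2}   # lower number = served first
ranked = [(priority[kind], arrival, payload)
          for arrival, (kind, payload) in enumerate(packets)]
heapq.heapify(ranked)
print([heapq.heappop(ranked)[2] for _ in packets])
```

In the first printout the blog post arrives in the order it was sent; in the second, the Skype frame and the video clip jump ahead of it.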

Election reporting – Turning bar charts into a multimedia show

[Image: Elefantenrunde]

It was federal election time in Germany yesterday. Since this blog isn’t primarily about political commentary, I shall refer you here for a more detailed summary of the results, if you’re interested. In a nutshell, Angela Merkel’s conservative Christian Democrats (CDU) will form a new centre-right alliance with the pro-business Free Democratic Party (FDP). Since this blog is primarily concerned with communication and all its related matters, I took a closer look at how the election night was reported by mainstream and social media.

Teachers, students, technology – New and shifting boundaries

[Image: classroom]

I just want to quickly recommend a brilliant article I found in yesterday’s Guardian supplement about how new communication technologies change the relationship between teachers and students. The starting point is the most recent moral scandal in the UK, in which a female teacher was jailed for having an affair with a 15-year-old girl. A large number of the text messages they had exchanged were used as evidence in the case.

The Guardian article by Jon Henley offers a very balanced and nuanced perspective on how teachers and students have started to interact through new technologies. It’s all about boundaries that were once clearly established and now seem to be becoming permeable. It’s about questions such as “Should I be friends with my students on Facebook?” or “Is it okay to send them emails?”.

When teachers and students suddenly meet in some virtual space, there are risks for both parties. So far, most public attention has focused on teachers who find themselves ridiculed on some video website or photo blog. Germany recently witnessed a court case in which a teacher sued an online portal that allows students to grade their teachers. The teacher lost the case, but it sparked a controversial discussion about rating websites of all kinds, from doctors to travel companies.

As several cases cited in the Guardian article illustrate, students are also at risk when teachers use these new media to approach them in an indecent fashion. Oftentimes, social networking sites and other virtual spaces cannot offer enough control over the interactions they enable. This problem clearly extends beyond the teacher-student relationship into online child pornography in general.

What does it mean to be a teacher?

The bigger picture here is not so much about being ridiculed or indecent contact with minors. It’s about the changing role and self-understanding of teachers in an age of free-flowing information. It will no longer be possible for them to guard their classroom as a little island where they enjoy unchallenged authority over what knowledge gets circulated and how students learn. Teachers and schools will need to adjust to a new information environment in which they provide guidance on how to deal with these massive amounts of information.

That includes opening themselves up to new communication technologies (social networking sites, email, etc.) and figuring out how to engage with their students while maintaining important boundaries.

Alan and Marvin – When machines talk

I’m a bit behind with my commentary on current news. But that’s okay, I think. Some days or weeks ago, the news was that Gordon Brown had issued an official apology to Alan Turing, a genius computer scientist who was heavily discriminated against and “treated” for being gay until he committed suicide in 1954. Previously, Turing had helped crack the German Enigma code during the Second World War.

The story of how Brown’s apology came about is in itself noteworthy, for it came out of an e-petition on the UK government’s website. I must admit, I didn’t even know such a thing existed in this country, and I will certainly have a look at how e-petitions work. In this case, more than 30,000 signatures were collected and Alan Turing rightfully received his posthumous apology.

Turing Test of human “intelligence”

I guess Alan Turing is best known for his Turing Test. The Turing Test is an assessment of a computer’s ability to mimic human intelligence to such perfection that a human judge can no longer tell the difference between the computer and another human. It’s important to mention that the computer and its human counterpart are placed in separate rooms, and both talk to the human judge via some sort of text-based interface so that neither voice nor handwriting influences the verdict. The human judge can ask all sorts of questions through that interface to figure out who’s human and who’s not.
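Just to make the setup a bit more tangible, here is a toy Python sketch of a single round of such a test. Everything in it (the run_round and chatbot_reply functions, the canned answers, the seat labels) is invented for illustration; a chatbot this simple obviously wouldn’t fool anyone.

```python
import random

def chatbot_reply(question: str) -> str:
    """A trivially simple stand-in for the machine contestant."""
    canned = {
        "how are you": "Fine, thanks. And you?",
        "what is 2+2": "4, I think.",
    }
    key = question.strip().lower().rstrip("?")
    return canned.get(key, "Interesting question. Tell me more.")

def run_round(questions):
    """The judge talks to contestants A and B over a text-only channel
    and must then guess which seat holds the machine."""
    machine_seat = random.choice(["A", "B"])  # hide the machine in a random seat
    for q in questions:
        for seat in ("A", "B"):
            if seat == machine_seat:
                answer = chatbot_reply(q)
            else:
                answer = input(f"[human contestant, seat {seat}] {q}\n> ")
            print(f"{seat}: {answer}")
    guess = input("Judge, which seat holds the machine (A/B)? ").strip().upper()
    print("Correct!" if guess == machine_seat
          else f"Wrong. The machine sat in seat {machine_seat}.")

if __name__ == "__main__":
    run_round(["How are you?", "What is 2+2?"])
```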

Every so often, some bright minds actually hold a real competition based on the Turing Test (the Loebner Prize, for instance). The contestant with the best human-intelligence-mimicking computer wins. I don’t know what they win, though. Probably money and three levels up on the geek scale.

Of course, you may ask, “Wait a minute… you call being able to communicate through some text-based interface ‘intelligence’?” The philosophical debate about this is as old as the Turing Test itself and if you expect an authoritative answer in the next few paragraphs, I’m afraid I must disappoint you.

My humble opinion would be that simply being able to carry on a nice conversation through some chat program doesn’t mean the computer in the other room is “intelligent”. Intelligence means relating the words and sentences of the conversation to the situation and the context in which it takes place, to everything that happened before and everything that is likely to happen later, and finally, relating the words and sentences to a “being” (don’t make me define “being”…). The conversation only makes sense when you interpret the words and sentences in relation to the unique “being” who expressed them.

The author John Durham Peters put forward the argument that Turing reduced intelligence to communication without the presence or interference of human bodies because he (subconsciously) wanted to escape the stigma of being homosexual. He may have hoped for a form of interaction and communication that is not distorted by any human “flaws”.

Speaking of talking machines…

While writing this, I remembered that science fiction has already created a number of intelligent machines and robots that would not only ace the Turing Test but also live up to the standards of intelligence I have just tried to describe (R2-D2, etc.).

My favourite example is Marvin, the Paranoid Android from The Hitchhiker’s Guide to the Galaxy. He’s an incredibly intelligent robot, but he’s also incredibly depressed. Here are just three little quotes…

Marvin: “I am at a rough estimate thirty billion times more intelligent than you. Let me give you an example. Think of a number, any number.”
Zem: “Er, five.”
Marvin: “Wrong. You see?”

“I’d give you advice, but you wouldn’t listen. No one ever does.”

“I think you ought to know I’m feeling very depressed,” Marvin said.

Behind the screens – Broadcasters going digital

[Image: videotapes]

These days, I’m working with a small company that basically helps TV stations adopt the latest technology. The latest technology in broadcasting is digital and “tapeless”. Gone will be the days when films, shows, and commercials were recorded onto video tapes and ultimately stored as such on endless shelves in the basement (see picture). Tapes will be replaced by computer files, and the basement shelves by a few fancy hard drives. Of course, this doesn’t happen overnight and not all tapes and basement shelves will disappear, but that’s pretty much the direction the industry is going in.

Will the average TV viewer notice the difference? I doubt it. In fact, when I told friends and family about what I’m up to these days and how TV stations are only now beginning to abandon their video cassettes, most of them were surprised. They would say, “I thought they were all already doing it that way?” But they are not. Little does the average TV viewer know about how those films, shows, and commercials on his screen come about. A whole new world is opening up for me these days as I discover what goes on behind the screen.

Ordinary people

I will spare you any intriguing discussion of how to build “tapeless” broadcasting systems or how it may completely change TV stations as organizations. I’m just trying to come to terms with this separation between those who make television and those who watch it. It’s the same with stage performances, for example musicals. As somebody in the audience, I don’t know what’s going on behind the scenes or “how they do it”. Maybe I don’t want to know. Maybe I shouldn’t know, because if I did, the show wouldn’t be the same.

That’s the magic of television (and other media as well) – the separation between “ordinary people” and those who make it. Nick Couldry, a leading media professor, has said that without this separation the whole media world as we know it wouldn’t work. We only trust the news and feel entertained by the latest sitcom because we have no clue how they’ve done it. He’s pretty critical of this because only those who do know how it’s done have the power to do it again and to produce the films, shows, and commercials they like. The ordinary viewer is confined to being an ordinary viewer.

Everything like YouTube

At this point you may like to point out that millions of YouTube users are making their own little clips and broadcasting them to the world (if anybody cares to watch them). It’s almost boring to mention citizen journalism and user-generated content now (where did Web 2.0 go?). And indeed, some of the technologies that TV stations are now implementing seem a bit more democratic because they use the same standards and follow similar concepts to those that an ordinary person would use. For example, the TV station and the ordinary viewer will both have a hard drive with digital video files in very similar formats – only that the TV station’s hard drive is slightly bigger and surrounded by a bunch of other supporting hardware. But the idea is the same.

What this means for television and the media as a whole remains to be seen. One obvious consequence to me is that it will be easier for content to flow from the viewer/user to professional TV stations, at least from a technological point of view. This doesn’t mean that content will actually flow once we take all legal and organizational barriers into account. But at least there’s now the option of my YouTube video being easily transmitted to the BBC.

However, more democratic technology doesn’t mean that the separation between ordinary viewers and media makers will become permeable. A TV station will remain a little world of its own, a mystery to anybody outside of it. Technology is by no means the only way by which the viewer-producer separation is maintained. Professional conduct of people working in the television industry is another important one. So is the geographical split between places of media production (e.g. the news studio) and media consumption (the living room).

Ultimately, I think many viewers don’t care to know how their program gets to them as long as it does and as long as it keeps them happy and nicely amused.

The trouble with Facebook friends

[Image: Facebook friends]

There’s plenty of talk at the moment about the impact of social networking sites on friendship. Bring up the topic at a party or during a coffee break and you will certainly trigger quite a lively discussion. Some will tell you that Facebook is the end of friendship as we know it. Others will proudly report how they reconnect and interact with so many more people than they used to and how that certainly cannot be a bad thing, can it?

I would offer a boring compromise. My close friends are still my close friends and there will always be only a handful of them. Similarly, there will always be a few hundred others I’m just not that close to – whether they now populate my Facebook newsfeed or not. In other words, social networking sites are unlikely to change how important a person is to me, but they will change the way I interact with them. They add to and alter the mix of communication channels.

0=not a friend, 1=friend

A general problem in this discussion about whether it’s good or bad to have 583 Facebook friends is that inconspicuous little word “friend”. It’s quite a tricky one. Facebook deals with friends in a binary fashion: 0=not a friend, 1=friend. It might be a cultural thing that Americans see the world that way, but it’s certainly a bit too black and white for the rest of us. Of course, for a critical commentator, it is then quite easy to jump at a friends list with 583 people and announce the end of friendship.

Would it help if Facebook had a more nuanced friends classification scheme? Let’s say it could range from “most awesome best friend in the world” to “randomly met at a party on my way out”. While this would certainly make it clearer that not all Facebook friends are created equal, it would be terribly impractical, as I recently discovered.
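Purely as a thought experiment (Facebook offers nothing of the sort), here is a small Python sketch of what such a graded scheme might look like next to the existing binary one; the tier names and the example people are invented.

```python
from enum import IntEnum

class Closeness(IntEnum):
    """A hypothetical graded friendship scale, in contrast to Facebook's binary one."""
    RANDOM_PARTY_ACQUAINTANCE = 1
    COLLEAGUE = 2
    OLD_SCHOOL_FRIEND = 3
    CLOSE_FRIEND = 4
    MOST_AWESOME_BEST_FRIEND = 5

# Facebook's actual model: 0 = not a friend, 1 = friend.
binary_friends = {"Alice": 1, "Bob": 1, "Carol": 1}

# The imagined nuanced model: every contact gets a grade.
graded_friends = {
    "Alice": Closeness.MOST_AWESOME_BEST_FRIEND,
    "Bob": Closeness.COLLEAGUE,
    "Carol": Closeness.RANDOM_PARTY_ACQUAINTANCE,
}

# The nuance comes at a price: someone has to assign (and keep updating) all these grades.
close_circle = [name for name, grade in graded_friends.items()
                if grade >= Closeness.CLOSE_FRIEND]
print(close_circle)  # ['Alice']
```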

Friends on a scale from 1 to 10

I decided to do a bit of social management on my Facebook friends list. My newsfeed had been full of stuff and people I wasn’t interested in, my privacy settings didn’t distinguish between different groups of people, and overall I wanted a bit more intimacy with those close friends I care about. So the idea was to create different lists (you can do that) and assign friends to them according to how close I am to them.

This failed. I must admit that rating friends on some one-dimensional scale is a terrible, useless, and probably quite unethical idea. From a practical point of view, I had to give up after 10 people or so because it took me forever to decide where to put each of them. Funnily enough, while I was thinking about them and where to put them, they tended to move back up the scale and I felt the urge to contact them immediately.

So in the end, I created lists according to how I know the person, for example high school, work, and so on. This turned out to be quite nice because I can now tune in to different social news streams from different stages of my life. I also ended up deleting a few people because – despite all my research attempts – I could not figure out who they were or how I knew them.

Helvetica – A great documentary. Also a great font?

[Image: Helvetica]

Would you believe that it’s possible to produce a fascinating feature-length documentary about a typeface? After watching “Helvetica” by Gary Hustwit at the Institute of Contemporary Arts last night, I know it can be done. This intriguing film features interviews with renowned typographers and designers, discussing the history of the typeface, its cultural significance, and its aesthetic appeal. By the end of it, I was truly blown away by the implications that something as inconspicuous as a font can have.

The idea to see the movie came about when my Facebook news feed told me that IKEA had just changed its typeface from Futura to Verdana, which created a fairly strong reaction in the design community and possibly beyond – to the point that IKEA had to issue a statement that basically read “accept it and get a life”. By then, the story was all over the mainstream media.

As both the IKEA incident and “Helvetica” demonstrate, typefaces play an important role in our visual culture and daily lives; yet, we hardly ever notice them unless they suddenly change or someone produces a documentary about them. Walking home last night, I found myself scanning the urban landscape for signs and posters designed in Helvetica. They are everywhere.

Modernity and beyond

Two related themes run through most of the documentary. Firstly, Helvetica as a typeface strongly reflects the transitions in arts and culture that have taken place over the last 50 years or so. It came about in the late 1950s, in the post-war years when modernity really took off. There was a need for a clean and simple font that could be used for any purpose, one that was “neutral”, as one of the interviewed designers called it.

Helvetica then spread rapidly across the world, lending itself to numerous corporate identities, magazines, advertisements, and so on. Over time, this triggered two responses. One was that Helvetica became synonymous with globalization, with capitalism spreading recklessly, with standardization, conformity, uniformity – you get the idea. Apart from this political critique, Helvetica simply became a bit boring, given it was everywhere.

Then came what some people call post-modernity. Others, including some of the designers in the film, call it by all sorts of bad names and wish it had never happened. Typography went from clean and neutral Helvetica-style fonts to almost complete anarchy, to “anything goes”. Suffice it to say that one of the interviewed designers carved the words of an album cover into his own skin. In the end, that phase ebbed away a bit as well. And what are we left with today? Who knows…

Medium and message

The other interesting theme in “Helvetica” turns the relationship between a typeface and its surrounding culture on its head. Not only does culture influence the typeface we prefer, but the typeface also shapes this culture. It’s the visual channel through which we express ideas. It’s like a stage for expression. As such, it enables and constrains what we can say. A more straightforward way of saying this would be that there are some things you can say in Helvetica and others you cannot say in Helvetica.

In the documentary, designers disagree over this point. Some say that typefaces are merely functional. They are there to transport the meaning of the words, but other than that they should stay in the background. Helvetica is almost perfect in this regard. The reader doesn’t notice the font and goes straight to the content of what’s written.

Other designers emphasize that a typeface itself communicates something to the reader, which indeed it should. Saying something in Times New Roman is very different from saying the same thing in Comic Sans, for instance. If this is the case, which I believe it is, then we may argue that Helvetica was not only an outcome of modernity, but also an active element in facilitating its spread.