
Digest powered by RSS Digest

  • Thnks Fr Th Mmrs: The Rise Of Microblogging, The Death Of Posterity

    A poor life this if, full of care,
    We have no time to stand and stare.
    - W.H. Davies, Leisure

    A little over a week ago, I closed down all of my social media accounts, with the exception of Twitter, which I locked. The explanation I gave was that, in an age when everyone and their dog is sharing every aspect of their life, being a digital recluse is the new “Internet famous”.

    Since then, some people have criticized my logic – pointing out that if I really wanted to be a digital recluse then I’d close Twitter too. By drawing attention to myself for becoming a semi-hermit, am I not just trying to have my social media cake and eat it too?

    Perhaps. The truth is that there were numerous reasons for me wanting to dial down my use of social media, but presenting numerous arguments in one column is the kiss of death to a columnist. The neo-narcissistic benefits of locking Twitter were what finally made my decision, and so that was the reason I gave. The others would keep.

    This morning, though, Leo Laporte wrote a hugely revealing blog post and, in doing so, artfully proved the misquoted maxim that the medium is the message. In short: Laporte discovered last night that, due to a glitch in Google Buzz, several weeks of his updates had failed to reach either Buzz or Twitter. The kicker? Not one of his tens of thousands of followers had noticed, or cared.

    Leo’s response was a vow to turn his attentions back to his blog – a place where people visit specifically to read about Leo, and where they email in the hundreds if he skips an update. By contrast, he argues, people on Twitter are so busy broadcasting their own updates that they’re unable or unwilling to listen to others’.

    But, while I certainly agree with Leo’s reasoning for abandoning Buzz and going back to macro-blogging, it was another – almost throw-away – line in his post that chimed most loudly with me.

    “I should have been posting [on his blog] all along. Had I been doing so I’d have something to show for it. A record of my life for the last few years at the very least. But I ignored my blog and ran off with the sexy, shiny microblogs.”

    Reading that line, I instantly felt Leo’s pain. When I was researching my most recent book – which mainly focusses on the events of the past three years of my life – I spent several days going back through my blog archives, plus Twitter, Facebook, Flickr and the rest – to remind myself of details and events that may have been missing from my more traditional notes. What I found – or rather didn’t find – shocked me.

    Throughout my earlier archives, I was able to find lengthy, sometimes surprisingly personal, posts – recounting the highs and lows of starting companies, making and losing friends, leaving London, beginning to travel around America and Europe… and countless other published episodes that backed up, and enhanced the contents of my private notebooks. But then, as I clicked forward through the archives to more recent years, something odd happened. At a certain point, the number of posts in each monthly archive dropped off a cliff, particularly where details of my personal life were concerned.

    The reason, of course, was that I’d started to use Twitter for that kind of personal stuff. Unperturbed, I moved my research attentions away from my blog archives and over to my Twitter archives – and that’s when I started to panic: for all the dozens of updates I wrote each month, there was absolutely no substance to any of them.

    “I am learning a lot about pens,” reads one update from last year. What does that even mean? “Ok, that’s quite enough of all this. I’m going out,” reads another. Enough of all what? And where was I going? Of course, the fact that I’m a particularly boring tweeter doesn’t help, but look at anyone’s Twitter account and it’s the same story – 140 characters simply doesn’t give enough depth or breadth to commit events, memories or feelings to the permanent record.

    I’m one of the lucky ones: I hand-write a lot – and I mean a lot – of notes. Recalling personal experiences is what pays my rent so I have dozens of Moleskine books full of memories to look back on. I also have a similar number of published columns and a couple of memoirs to refer to if my recollection gets patchy.

    Others aren’t so fortunate. A decade or so ago, a new generation who would previously have kept diaries instead started to set up blogs. Sure those blogs may have been twee or self-absorbed or clumsily written or emo or just plain boring – isn’t that the joy of a diary? – but they at least required the writer to take the time to process the events of their life, and the attendant emotions they generated – before putting finger to keyboard. The result, in many cases, was a detailed archive of events and memories that they can look back on now and say “that was how I was then”.

    And then along came micro-blogging – and, with a finite amount of time and effort available, the blog generation turned into the Twitter (or Facebook) generation. A million blogs withered and died as their authors stopped taking the time to process their thoughts and switched instead to simply copying and pasting them into the world, 140 meaningless characters at a time. The result: a whole lot of sound and mundanity, signifying nothing.

    To argue for a mass switch back from Tweeting to Livejournaling (or Bloggering, or Movable Typing…) in the interests of the permanent record is as ridiculous as campaigning for everyone to abandon instant messaging and return to letter-writing. The fact is people are busy (or lazy, depending on your view of humanity) and for the vast majority, immediacy will always trump posterity.

    But for those of us who have had reason to look back at the past few years – like me writing my book, or Leo having “woken up to a bad social media dream in terms of the content I’ve put in others’ hands” – the realisation is slightly terrifying: by constantly micro-broadcasting everything, we’ve ended up macro-remembering almost nothing.


  • Fluorine do-over at Periodic Table of Videos
    For those who have followed Dr. Martyn Poliakoff's Periodic Table of Videos since Xeni's 2008 post, you may recall he didn't have much luck with his first fluorine episode, settling for "just a few bits of glassware and some funny stories." So last month he went back and found Eric Hope at the University of Leicester. Dr. Hope is clearly in his element when handling the highly reactive chemical. Watch as cold fluorine gas sets charcoal on fire on contact and burns holes through steel wool:

    For those who just want the reactions without the charming scientist banter, here you go.


I’ve been on a 1970s sci-fi reading trip this week, polishing off Robert Silverberg’s Alpha Three collection, Clifford D. Simak’s Shakespeare's Planet, and Larry Niven’s Ringworld (a second time) in only 9 days. But I can’t say I’ve been so overtaken by it that I feel like getting jiggy w/ the Bradbury!


  • Attempt At P ≠ NP Proof Gets Torn Apart Online

    What happens when Twitter and online communities filter scientific discovery ahead of professionals?  As we saw this week, a lot of fuss over a result that will ultimately be discarded into the dustbin of flawed mathematical proofs.

    Computer scientists have long believed that a large number of useful computational problems require an impractically large amount of CPU time to solve.  Decades ago it was discovered that a large set of these problems – the “NP-complete” problems, the hardest in a class called “NP” – have so much in common that a fast solution to just one would imply a fast solution to all. But after years of no one discovering such a solution, the standard assumption among mathematicians is that no solution will be found.
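
    The asymmetry at the heart of NP can be sketched with a toy example (subset sum; the numbers and function names below are my own illustration, not from the post): checking a proposed answer is quick, but the only obvious way to find one is to try every subset, and the number of subsets doubles with each element added.

    ```python
    from itertools import combinations

    def verify(subset, target):
        # The NP property: a proposed solution can be checked quickly
        # (here, in time proportional to the subset's size).
        return sum(subset) == target

    def brute_force_subset_sum(nums, target):
        # Finding a solution, by contrast, has no known shortcut:
        # this search may examine all 2^n subsets of nums.
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if verify(subset, target):
                    return list(subset)
        return None

    print(brute_force_subset_sum([3, 34, 4, 12, 5, 2], 9))  # [4, 5]
    ```

    P = NP would mean every search like this has an efficient alternative; the missing proof is that, for NP-complete problems, it does not.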

    But assumptions aren’t good enough for mathematicians, who prefer proofs, and they have long been looking for one that settles whether NP problems will ever yield to a fast solution.  In mathematical terminology, the proof they’re seeking is “P ≠ NP” (that NP problems are not equivalent to “P” problems, the kind that can be solved relatively quickly).  They’ve been seeking it so long that the Clay Mathematics Institute has offered a $1,000,000 prize to anyone who provides a valid proof.

    This past Sunday HP researcher Vinay Deolalikar became the second person at HP to fall victim to the Internet fameball machine, as he posted his attempt to prove P ≠ NP on his personal website, and sent out an announcement email, which got picked up by the original formulator of the problem, Stephen Cook. Cook forwarded Deolalikar’s email to select mathematician colleagues with the now infamous statement “This appears to be a relatively serious claim to have solved P vs NP.”

    Cook’s validation email got the paper picked up by research blogs, then HackerNews, then mainstream media. Both armchair and professional math pundits proceeded to tear it apart in comments sections and subsequent blog posts, finding major flaws. Before the age of Twitter, Facebook and social news aggregation, draft research papers on something as complex as P ≠ NP would have gone through a rigorous academic process, one focused on whether the proof strategy is sound and whether the apparent errors are easily corrected.

    Deolalikar’s proof draft was public for a day before being pounced on by the online chattering classes. When I emailed him for comment I received the following auto-response: “My email is currently backlogged; please bear with some delays in responding.” No kidding. Despite the “Please note that the final version of the paper is under preparation and will be submitted to journal review” disclaimer, heated discussion of the finer intricacies of the proof’s failures is ongoing.

    Why should we care? The real-world implications of P not equaling NP are that certain algorithms will always take a long time, and certain functions in cryptography (which relies on some problems being hard to solve) will remain secure. If P were equal to NP, it would transform mathematics and the way we see the world, as many computational problems previously thought intractable could be solved efficiently: the Traveling Salesman problem, Graph Coloring, and so on. Most large computer systems need to solve NP problems, and currently make do with less-than-ideal solutions.

    Or maybe because we’re all a bunch of closet wannabe math nerds?
