More on Micromedia

As with the previous post on micromedia and de-localization, this one is not aiming to be anything but obvious. If the trends indicated here do not seem uncontroversial, it has gone wrong. The sole topic is an unmistakable occurrence.

The term ‘micromedia’ is comparatively self-explanatory. It refers to Internet-based peer-to-peer communication systems, accessed increasingly through mobile devices. The relevant contrast is with broadcast (or ‘macro-’) media, where a relatively small number of concentrated hubs distribute standardized content to massive numbers of information consumers. The representative micromedia system and platform is the Twitter + smartphone combination, which serves as the icon for a much broader, and already substantially implemented, techno-cultural transformation.

Besides de-localization, micromedia do several prominent things. They tend to diffuse media content production, as part of a critically significant technological and economic wave that envelops many kinds of disintermediation, with the development of e-publishing as one remarkable instance. By ushering in a new pamphlet age, these innovations support an explosion of ideological diversity (among many other things). No mainstream media denunciation of Neoreaction is complete without noting explicitly that “the Internet” is breeding monsters, as it frays into micromedia opportunities. (In all of this, Bitcoin will be huge.)

No less widely commented upon is the compression of attention spans within the micromedia shock-wave. Fragmentation and tight feedback loops re-work the brain, producing Attention Deficit Disorders that can seem merely pathological. Once again, the twitter-smartphone combo provides the iconic form (right now), splintering discussion into tweets, making interactivity a near-continuous agitation, and perpetually dragging cognition out of geo-social ‘meat-space’ into a flickering text screen. Read a book and then comment upon it? That wavelength has nearly gone. It’s easy to see why this tendency would be decried.

… but, if this isn’t going to stop (and I don’t think it will), then adaptation becomes imperative. We don’t have to like it (yet), but we probably need to learn to like it, if we’re going to get anywhere, or even nowhere (in particular). Whoever learns fastest to function in this sped-up environment has the future in their grasp. The race is on.

Much more on this (I’m guessing confidently) to come …

February 6, 2014 · admin
FILED UNDER: Technology

10 Responses to this entry

  • Ex-pat in Oz Says:

    Micromedia is also generating sentiment data and grist for analysis. Sentiment analysis is already becoming a mainstay of many industries, and crowdsourcing is rationalising all kinds of existing processes (Mechanical Turk, etc.). Data will drive predictive decision making. This all aids the NR/DE, I think, which is fundamentally data-aware. Data-driven decision making CAN go all kinds of wrong for a number of reasons, but it is generally because of observer bias (“I can’t/won’t/must not believe my lying eyes!”). The Cathedral will go down this route and NR/DE data science types will shake their heads. As data gets easier to understand, the disconnect will grow. There’s already an effort to share data with the public: http://sunlightfoundation.com/ Ironically, the “democratisation” of data could play hugely to the advantage of the DE/NR.

    Posted on February 6th, 2014 at 8:17 pm
  • Vxxc Says:

    Last Religious Social Revolution was Protestant Reformation. Printing Press essential.
    As an important corollary we can then safely utterly exterminate daytime TV.
    One must learn to find tangible rewards and opportunities up front and as one goes.

    Posted on February 6th, 2014 at 8:38 pm
  • Candide III Says:

    I violently disagree with the conclusion for two reasons that I am aware of. The first reason is based on physical, the second on philosophical understanding. To see the first reason, consider the progression of media towards ever shorter feedback loop times and attention spans. With books of a century ago, it was years; with Twitter, it’s minutes; with Snapchat, probably tens of seconds. A couple more iterations, and ‘micromedia’ will be hard up against human reaction time. Moreover, the speed at which the brain is capable of processing information remains constant. It is fairly high for visual inputs, but only by virtue of specialization. It is much lower for symbolic input (whether written or spoken), and the increasing rate of incoming bytes is bound to overwhelm processing capacity (if it hasn’t already). I believe this argument is fairly standard and I’d like to know how you respond to it.

    One possible response is that this does not matter: the brain will learn to cope with the avalanche of symbolic and para-symbolic input and produce some sort of output related to this input. However, since System II thinking is a fairly slow process requiring concentration, the compression of the relevant time scales seems to imply that System II thinking will be increasingly side-lined and marginalized. This brings me to the philosophical reason: without an active System II providing reflection etc., how are we different from animals? Won’t we be sacrificing too much for too little?

    admin Reply:

    The trend isn’t set solely by the edge, but by the compression of social process towards the edge. Even if we’re touching upon hard biological limits, the fraction of the human species — and even of the more relevant denizens of advanced industrial societies — who have been accelerated up close to this limit still remains relatively small (so mere normalization of today’s extreme technology habits ensures continuing momentum).

    As for the edge — how far away is biomodification / augmentation designed to speed up cognitive performance? The trend has already marked out this target, and market demand is probably already there.

    Finally, on the problem of ‘System II’ and accelerated cognition — that introduces a topic which is far too interesting to rush.

    Candide III Reply:

    I am much more pessimistic about the possibilities of augmentation than you appear to be. I do not doubt that ‘seamless’ augmentation is theoretically possible — it does not appear to violate the laws of physics — but hardly possible practically at anything like our technological level, since it seems very likely that a two-way connection to a sizeable proportion of cerebral neurons would be required. As for non-seamless augmentation, it appears to put a person using such augmentation into a position analogous to a not-too-smart boss over a team of sharp subordinates much smarter than he is, running a business in a very competitive environment (other bosses having such teams of their own). In the best case, the team may cooperate to keep the boss happy, but he won’t exercise any real control over them. Less charitably, his position can be described as a ‘meat co-processor’. Remember the bra that opens for ‘true love’ and the smartphone that might vibrate in your pocket to tell you to kiss her? I don’t want that.

    Posted on February 7th, 2014 at 12:59 pm
  • spandrell Says:

    Alas, that which cannot continue must stop:

    Twitter User Growth Decelerating: +6% In Q3 To 231.7 Million Now Vs +10% In Q1
    http://techcrunch.com/2013/10/15/twitter-growth-decelerating/

    Smartphones have already achieved feature saturation. Yes, computing power will grow, but there’s little else it can be used for given the constraints of the touch interface; they’ve tried voice interfaces, but most people find it dorky to talk to a machine, so they failed.

    Tech optimism is fine but I think we’re lacking a breakthrough here.

    futuremurder Reply:

    The tactile advancement made by the touchscreen will not end there. Integration is the obvious next step (see Google Glass) before an actual physical integration between small processors and the eye. From there, we will have brain-processor links. Thus, the problematic us/them divide is overcome and talking to a machine will no longer seem as awkward. It may stop, but it will not do so in the near future.

    spandrell Reply:

    The Google Glass interface sucks so badly it’s not even funny.

    We can’t yet make a voice interface which doesn’t lag and ask you to repeat every 2 utterances. Brain-processor links are 100 years away.

    Posted on February 7th, 2014 at 1:08 pm
  • orlandu84 Says:

    What happens when everyone talks at the same time? Almost no one listens. Taking Moldbug’s analogy to its logical extreme, the amount of techno chatter being produced by everyone will make it increasingly difficult for the Cathedral’s sermon to be heard, let alone dominate the conversation. In the age when every phone is a printing press, censors no longer matter for most of the population. Certainly they will try all the more to control speech and shape public opinion, but the sheer volume of communications makes such work increasingly irrelevant beyond a certain radius of control. Whether that control is mostly physical, digital, or social is what the next couple of decades will determine.

    Candide III Reply:

    You forget that the substance of the chatter is becoming progressively more inane. Nobody can censor messages about who’s kissing her now and images of funny cats, but it’s not worth the bother.

    Posted on February 8th, 2014 at 3:17 am
