The good and bad of social video broadcasting. Social media enters a new phase

Periscope vs Meerkat

Social media has entered what is likely to be a wildly popular new phase with the advent of the live video streaming apps Periscope and Meerkat. Both apps allow people to broadcast live video and sound from their phones to other people on Twitter. Periscope also lets viewers interact with the broadcaster through text messages and by sending “hearts”, the equivalent of Facebook likes, with a tap on the video.

On the surface, making everyone a “broadcaster” is a grand idea. As the announcement for Periscope proclaimed:

What if you could see through the eyes of a protester in Ukraine? Or watch the sunrise from a hot air balloon in Cappadocia?

And it is true: live streamed video has the capacity to bring the experiences of ordinary people around the world, in raw and undiluted form, to anyone on Twitter.

The idea is not necessarily new. Ustream, for example, has been providing a live streaming service for the past eight years. The Occupy Wall Street movement showed the potential of live streamed video to great effect by broadcasting demonstrations, live action and interactions with the police. As immersive an experience as it was, the problem with Ustream was that it wasn’t integrated with social media platforms, so discovering what was going on and reaching a large audience was difficult.

Twitter recognised this problem and solved it by buying Periscope for a reported $100 million. Twitter and Facebook have both recognised that video is the next area of growth in social media. Although YouTube has long been held up as the service to beat, it is far less of a social platform than Periscope is likely to be. From that perspective, Twitter has taken the lead in this space over Google and Facebook.

The problems facing these services, however, are substantial. Technical issues are already surfacing, with livestreams proving difficult to view because of the demand. Very few of the livestreams currently play; many simply show “Loading” before changing to “Ended”. Periscope, at least, offers the option of watching the recorded version. The cost of providing this service will be substantial, and it is not at all clear how Twitter will be able to monetise it. From the user’s perspective, broadcasting over the phone’s cellular data is likely to limit the length of streams, and will certainly have mobile network providers seeing a surge in demand.

More challenging, however, will be the social problems that a world of live streams will bring. Encouraging millions of people with phones to broadcast what is going on around them to the rest of the world is going to raise enormous privacy issues. Already, videos with titles like “My Girlfriend Taking a Shower” suggest the potential direction this service could take as people race to the bottom in search of viewers. As in many other cases, social media has moved faster than our ability to think through the consequences.

Admittedly, these are early days for Periscope and it is possible that it was forced to release the service early because of the launch of rival service Meerkat. Although Meerkat is reported to have raised another $12 million in funding, it is clear that as a service it is effectively dead when compared to Twitter’s offering. Twitter has already moved to block Meerkat’s access to the Twitter data vital to its social networking service. Meerkat’s only real chance of survival is for Facebook to buy it as a competitor to Twitter.

Despite the problems, video is going to be the future of social networks. Mainstream media have yet another challenge to face with this new service, and it is unlikely that they will deal with it any better than they have dealt with the rising challenge of Vine and YouTube celebrities.

As the academic Clay Shirky predicted in his book “Cognitive Surplus”, everyone is capable of becoming their own version of a mainstream media producer. As Shirky also said, however, for every serious project created by the world’s cognitive surplus, there will be the cat videos. On Periscope, cat videos abound, along with, curiously, a preoccupation with fridges.

The NY Times declares wearables cause cancer – a fail for journalism or science?

NY Times Wearables Cause Cancer

A columnist at the New York Times has written that he believes technologies like Apple’s upcoming watch could be as dangerous as cigarettes and cause cancer. The idea, and the evidence that the columnist, Nick Bilton, presented, has been universally panned, not only by a range of publications including Wired, The Verge and Slate, but by the New York Times itself. Margaret Sullivan, the New York Times Public Editor, has cried foul over the article, pointing out that Bilton should not have been commenting on science he clearly knew little about, and that the editor should not have used a headline constructed as “click bait”.

The article appeared in the Fashion and Styles section of the online paper and the title comparing wearables to cigarettes was eventually changed to the less incendiary “The Health Concerns in Wearable Tech”. The editor of the Fashion section subsequently responded to criticism and posted an editor’s note that basically retracted everything said in the article.

To be clear, the article was poor and reflected badly on a columnist whose abilities as a writer have been questioned before.

What was interesting, however, was the way Bilton’s critics picked apart his arguments. Much was made, for example, of how Bilton framed his argument as resting on scientific evidence when in reality he was taking one or two inconclusive reports out of a much larger body of research that did not support his point. Bilton was also called out for relying on the opinion of someone who was not a scientist but rather an “alternative practitioner” called Joseph Mercola. In the past, Mercola has claimed that almost everything can cause cancer or other harm, including mammography, fluoridation, amalgam fillings and even sunscreen.

In many ways, Bilton’s arguments followed a very similar line to those espoused by others claiming that vaccinations cause harm and that climate change has no scientific basis.

The truth is, we really don’t know at this time whether there are any long-term harmful effects of using mobile phones, let alone wearables. The fact that the mere suggestion of a danger caused such an outcry says more about the anti-science triggers encoded in the article than about whether the claim itself was true. In the end, it didn’t really matter what Bilton was arguing, just that he was abusing science, and that put him in a particular camp of people who do this for a living.

What this story perhaps points to is the difficulty of trying to distill scientific research into a form that is understandable and can be communicated to the public. This is, in and of itself, a difficult task because a great deal of fidelity is lost in the simplification. Using this simplified model to make an argument, however, is an almost impossible task. This is especially the case when amateurs mistake referencing and citation for the only hallmarks of the scientific process.

Part of the fault for the misunderstanding could also lie with the way science itself is distilled in the form of papers in journals. As Wired pointed out, hedge terms like “possibly”, “inconclusive” and “needs more research” are just that: filler terms that scientists add to signal that they don’t yet know all of the answers, but that if someone cares to fund them, they will do more research to find out. They simply mean that we don’t know, not that there is any evidence to suggest that the danger is real.

Ultimately, scientists themselves should be doing a better job of making research accessible to the public so that these misunderstandings don’t occur. They are in the best position to know what is and isn’t known; all that is needed is for this to be put in a form the public understands. If that were done, we wouldn’t need technology columnists, even those who work for the New York Times, failing to do it for them.

Ex Machina is less a movie about the nature of AI and more about the fantasies of men

Ex Machina Film

This year’s South by Southwest festival in Austin, Texas featured the US premiere of a new movie about artificial intelligence called Ex Machina. The movie, directed by Alex Garland, has received mostly positive reviews, principally for its attempt, in the words of the reviewers, to explore the nature of artificial intelligence and, ultimately, its dangers to humanity.

Unfortunately, the film sacrifices genuinely intelligent argument for drama, and the danger from AI ultimately manifests itself in a fairly predictable way. The human creation, a sexy female robot called Ava (Alicia Vikander), eventually becomes truly autonomous and thinks for herself. In perhaps a sign of true intelligence, Ava decides that life is going to be better without her creepy male creator, tech CEO Nathan (Oscar Isaac), and takes matters into her own cybernetic hands.

The final player in the three-hander is a programmer called Caleb (Domhnall Gleeson), who is brought in by Nathan to carry out a “Turing Test” of sorts to establish how convincingly intelligent Ava is. This comes with a challenge for Caleb: Ava begins flirting with him, rapidly diminishing his ability to think objectively and, after a while, leaving him prepared to believe almost anything.

Whilst the film may be entertaining, it builds on stereotypes and clichés concerning AI. One of these clichés is the idea that artificial intelligences, as learning machines, can become “superintelligent” and outstrip the intellectual capacity of their human creators. This theme was explored in the movie “Her”, in which computer operating systems collectively became so intelligent that humans could no longer understand even their most rudimentary thoughts. In Her, the AIs were benign and essentially left humans to get on with their lives whilst the AIs explored their own sentience.

However, the fear that is often espoused is that, as in the movie “Terminator”, the machines will turn against humanity and take control. Unfortunately, this idea has been given more credence by prominent technologists and scientists publicly discussing their fears about the dangers of AI. Former Microsoft CEO Bill Gates, Tesla CEO Elon Musk and Stephen Hawking have all voiced their opinion that we should fear the possible dangers of uncontrolled AI.

A group calling itself the Future of Life Institute – comprised of academics, actors and company CEOs – has formed with the goal to “mitigate existential risks facing humanity” and is “currently focusing on potential risks from the development of human-level artificial intelligence.”

Of course, what this translates into is a pitch for funding research into AI, which in and of itself is not necessarily a bad thing. It seems a shame, however, that it has been presented with the trappings of a semi-religious cult. The actual output of the group in terms of recommendations for research has been more understated. In particular, its published paper on areas of future research noted that “there was overall scepticism about the prospect of an intelligence explosion”, an intelligence explosion being the hypothesised process by which machines teach themselves to become super-intelligent.

The nuance here has largely been ignored, with the press latching onto the threat of a Terminator-like uprising of “killer robots”.

Ex Machina is less a film about an “existential” threat than about a male fantasy of having a perfect, and subservient, sex robot. In that respect, the AI aspect is incidental and the story is similar to that of the horror film “The Stepford Wives”, in which a town’s women are turned into mindless, totally submissive automata: the ultimate perfect housewife. The drama in Ex Machina comes from the ultimate male fear: a woman who fights back and asserts her independence.

The Pew Research Centre released a report in 2014 in which it asked a range of technologists for their predictions about the future of robotics. Only 48% of those asked felt that there would be significant displacement of blue-collar and white-collar jobs in the next decade. This is roughly the same time frame in which we expect to see cars becoming increasingly autonomous.

We are still a very long way off having the artificial intelligence capabilities depicted in Ex Machina. On a more prosaic level, we don’t even have the technologies to support the hardware that would be needed to provide that level of intelligence in a robot form. With today’s technologies, a phone battery barely lasts a day. If Ex Machina were real, Ava would have been constrained less by the limits of her intelligence than by the need to stay constantly plugged into a power source to recharge.