Never before have we been so protected when online. This was one of the points raised by Professor Thomas Rid in his recent lecture about ‘the rise of the machines’ at King’s College, London. Thomas, who specialises in the history of cybercrime, dismissed the idea that privacy is on life support – in fact, he believes the tracking, monitoring and surveillance we have now come to expect make us better protected. Four years ago Stuxnet showed how a piece of malware can affect and even destroy physical infrastructure. If something similar were to breach a power station, a dam or an airport, there could be real, even deadly consequences for anyone in the vicinity – disproving Thomas’ point.
This got me thinking about the different types of technology, where they look to be going and the people they will affect.
Healthcare, for example, has seen numerous developments in the last year, each device more feature-rich and, some would say, invasive than its predecessor. The IDC recently predicted that healthcare organisations will typically have experienced between one and five cyberattacks by the end of 2015.
Even the world of film has cottoned on to this concern, with recent films, such as Ex Machina and A Most Violent Year, addressing the potential consequences of robots and humans co-existing.
One man who relies on technology for all aspects of his daily life is Stephen Hawking, whose keyboard uses a form of artificial intelligence to help him communicate. In a recent BBC interview Hawking voiced his concern about advancements in this area and even went so far as to suggest that "the development of full artificial intelligence could spell the end of the human race". Some scientists warn that anything with greater intelligence than ours is unlikely to have humanity’s best interests at heart – will it improve, ignore or destroy us?
In contrast to this view – one also backed by Bill Gates and Elon Musk (who recently donated $10m to research into the safety of artificial intelligence) – Microsoft’s chief researcher, Eric Horvitz, revealed that he does not think the development of AI will mean we lose control to machine intelligence; instead, he believes we will see benefits in many aspects of our lives. And Ray Kurzweil, Google’s director of engineering, believes that we will reach ‘the singularity’ – the moment when AI and human intelligence match each other – by as soon as 2045. That’s only thirty years’ time.
Microsoft’s chief envisioning officer, Dave Coplin, argues that we’ve been striving towards this moment for a very long time and suggests that it is not ‘us vs them’, but instead ‘us plus them’. His perspective acknowledges that machines are not better than us, nor are they a replacement for us – we can do things they can’t do and, likewise, they can do things we could only ever dream of.
Developments in technology are not held back in the way we humans are. Technology does not have to wait around for the laborious process of natural selection; there is really very little that’s ‘natural’ about it.
But that is what distinguishes the human race from technology, at least for now. The differences allow us to marvel at each other’s capabilities and, in turn, find ways to learn from each other.
There’s no denying that a cyberattack will always remain a possibility, but we must make sure that these risks don’t hold us back. It’s easy to be cynical in this field, but instead of trying to prevent developments, should we not be educating consumers on the implications, actions and reactions that come hand in hand with these new technologies?
As long as we don’t begin to believe that technology is too clever for us, we can use it both to do things better and to do things differently.