This fundamental difference in types (not definitions) of intelligence is possibly more interesting than problematic. We won't digress in this piece to consider the social implications of 'real' machine intelligence (the ethics of 'robot rights', for example) or the different models of intelligence that might allow 'real' intelligence to be created (neural complexity, panpsychism, the biological dimension, spirituality, etc.). That's an argument in itself. But for now, anything that's exciting or scary about the TS applies broadly the same whether we're dealing with something that really is intelligent or something that merely appears to be. And remember, the TS is primarily a question of evolution: intelligence is a worthy but secondary, related issue.
In fact, this might be a good point to dispel another myth in relation to the TS. It has nothing whatsoever to do with the circuit complexity of any given processor or any collection of them. The point at which a computer's neural mass (presumably measured in number of logic gates) reaches that of the human brain is often portrayed as some significant point in AI development – sometimes even as the TS itself – but this is nonsense. The almost endless reasons why this doesn't make sense include these deal-breakers:
The structure and speed of a computer (including any network of them) are utterly unlike the brain's. Each brain neuron is directly connected to many, many thousands of others. Signals, however, move fairly slowly – just a few metres per second. By comparison, computer/network nodes (gates, switches, etc.) generally connect to just a handful of neighbours but electrical signals travel at close to the speed of light. In graph terms, the brain is dense but slow while a computer is sparse but fast.
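(For the numerically inclined, here's a minimal back-of-envelope sketch of the two regimes, in Python. Every figure in it is a rough, commonly quoted order of magnitude – an illustrative assumption, not a measurement – but the shape of the contrast survives any reasonable choice of numbers.)

```python
# A back-of-envelope comparison of the two "graph" regimes described
# above. Every figure is a rough, commonly quoted order of magnitude,
# chosen purely for illustration -- an assumption, not a measurement.

brain = {
    "nodes": 86e9,           # neurons (a commonly cited estimate)
    "fan_out": 1e4,          # synapses per neuron: dense connectivity
    "signal_m_per_s": 10.0,  # slow electrochemical signalling
}

chip = {
    "nodes": 50e9,           # logic gates on a large modern chip (rough)
    "fan_out": 4,            # a gate drives only a handful of neighbours
    "signal_m_per_s": 2e8,   # electrical signals: near light speed
}

for name, g in (("brain", brain), ("chip", chip)):
    edges = g["nodes"] * g["fan_out"]  # total connections: the 'density'
    print(f"{name}: ~{edges:.0e} connections, "
          f"signals at ~{g['signal_m_per_s']:.0e} m/s")
```

On those (assumed) figures, the brain has three or four orders of magnitude more connections, while the computer's signals are around seven orders of magnitude faster – dense but slow versus sparse but fast, and neither number on its own tells us anything about the TS.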
Whilst it's often admitted that we don't know what algorithms the brain runs (and so would struggle to replicate them), the truth is that we don't really know whether it runs algorithms at all – in any sense that we would recognise. The conventional notion of software running on hardware may have no equivalent in the brain. Its structure and operation may be inextricably linked in a way that we can't (yet) recreate in a machine. (There may be a biological foundation, for example.) Its hardware and software (perhaps even its power) may be inseparable. We may eventually understand how this works and seek to design machines on the same basis but we're not even close now.
So a computer is very unlike the brain, and the TS is not something that can be measured or counted in logic gates. It's what happens that's important.
His reading was interrupted by Jill poking her head around the door.
“Did you turn the cooker off?” she asked irritably.
“No. Why would I?”
“Well, someone – something – did! This is getting stupid,” she frowned. “Dinner’s going to be late,” she grumbled as she withdrew.
Bob resumed his reading …
The real question we have to somehow get to grips with is how we might expect these highly-evolved machines to behave. This seems to be the focus of most of the recent scare stories. Again, intelligence may be something we have to consider here but it isn't the driver. A new race of machines (in fact, probably many different species of them), superior to humans in every physical and mental way, could clearly be considered a threat. But it's not obvious, in this respect, that an 'intelligent' machine would be any more worrying than one that wasn't – that 'strong, fast and clever' is more dangerous than 'strong, fast and thick' – because, for example, we know the human (and animal) world often doesn't work that way. And all of this is made more difficult by our never having really worked out what these terms mean in the first place.
But an obvious key point here is whether we're going to remain in control of what these machines do. The implied concern behind a lot of the AI-related headlines is that we won't. If, over the long term and beyond the TS (and the notion of 'beyond' may be why 'singularity' isn't such a great term), machines only ever do what we tell them to, then humans remain responsible for whatever use and abuse may occur. The machines are effectively extensions of ourselves (tools) so, even accepting that legislation often struggles to keep pace with developments in technology, we might hope that 'conventional' human moral, ethical and legal codes can eventually be applied (not to the actual machines, of course – that wouldn't make sense – but to the way we use them). Whether these human social codes are themselves fit for purpose is well beyond the scope of this piece.
A much more serious situation arises if, as is generally expected or feared, machines evolve to the point of (at least appearing to) think for themselves, either by the autonomous extension of 'artificial' intelligence to new domains or by the acquisition of 'real' intelligence. At this point, we have to genuinely consider the rules or framework by which such a machine might 'think' and therefore 'behave' and, if what's gone before was difficult, this takes us into deeper, murkier, entirely uncharted waters ...
Frankly, what axioms do we have for dealing with this? Why do we even think the way we do? OK, we have many models, but they range from hard neuroscience, through various psychological theories, to concepts of the soul – and they're intersected by arguments for and against determinism and free will. C.S. Lewis, for example, describes the ‘Moral Law’ binding humanity (and there are more scientific versions available for the spiritually faint-hearted) but can any of this be a foundation for predicting the way machines will think and behave?
On the whole, humans try their best to apply logic to a moral foundation, albeit one that's difficult to define. We're not particularly good at this in practice. First, few of us really know what this starting point is and we have even less idea where it comes from. Second, we're not expert logicians in following an optimal line: we make mistakes. Third, real life usually gets in the way of the logic and a form of 'needs must' thinking overrides clinical reasoning. Fourth, we often knowingly deviate from what's clearly the right course of action because we're all – to a greater or lesser extent – flawed, which for some might even include not wanting to try in the first place. (Obviously there's a sense of fundamental human 'goodness' in this model, which isn't universally accepted.) However, in principle at least, we have a sense of direction through all of this. We either make some attempt to stay on course or we don't.
So the question is: can or will this sense of 'moral direction' be instilled in – and remain with – artificially-programmed intelligent machines, and/or will it be evident in machines achieving their own sentience? In particular, what would be their initial moral code? This seems like a very important question because we might reasonably assume that the machines' logic in putting the (moral) code into practice would be impeccable and not prone to diversion as it tends to be with us. But does the question even make sense? (Let's be utterly clear about this – Asimov's Laws of Robotics, in this context, are useless: simple fiction, and already frequently violated in the real world.) What might highly-evolved, super-powerful (possibly intelligent) machines regard as their 'purpose', their raison d'être? Would they serve, tolerate, use or replace humanity?
And we just don't know. We can define the question in as many ways as we like and analyse it every which way we can. But we just can't say. We can easily pluck unsubstantiated opinions out of the air and defend them with as much energy as we wish but there's really nothing to go on. Just as we can only speculate on what would motivate an alien race from a distant planet, it's anyone's guess as to what might drive a new technological species that either we've created or has evolved by itself. (This is all assuming we've surrendered control of the process by then.) In this respect at least, some amount of concern in relation to the TS seems justified – even if only because we can't be certain. It's taken us a long time to get to this position of doubt but concern relating to uncertainty isn't irrational.
Looking to tie this up somehow, if it's difficult to say whether we can ultimately coexist with intelligent robots then is transhumanism our insurance policy? As we put more human features into machines, will we take on more of theirs? Is the future not competition between 'natural' and 'technological' species but their merging? Cyborgs?
Some futurologists see transhumanism as a fairly inevitable destiny but does it really help?
Well, maybe. But it's a maybe with the same problems as the uncertainties of the TS itself, because it still depends on how the 'pure' machines will see the world. If ordinary humans are tolerated then 'enhanced' humans probably will be too. If not, then this level of improvement still might not be enough if machine logic takes a ruthless line. Again, the standard futurologist's view of transhumanism implies we'll still have some control but it remains to be seen whether that's the case.
And finally, possibly even optimistically, a word of caution ... If this potential elimination of humanity by a robot master race (repeated across equivalent worlds) seems like an answer to the Fermi Paradox – another version of the 'civilisations naturally create their own destruction before they can travel far enough' theory – we might have to think again. Even if the 'developer race' was lost in each and every case across the universe, why aren't the machines talking to each other? (Or are they?)
And there we are ... already at the end of the piece and we don't know. Many people have written a lot more, and a lot less, and they claim to know but they don't. There are just too many unknowns and we'll have to wait and see. Should we be scared by the TS or not? Well, in the sense that it's uncertain and unpredictable, yes. But lots of things in life are uncertain and unpredictable. For some of us, death itself is uncertain and unpredictable.
So, is the TS really a 'singularity'? In a strictly Gödelian sense, it might be. Probably, we'll know when we get there – but not before!
*
Bob smiled. Although not really one for all this ‘arm-waving futurology’ business, as Jenny often described it, he never failed to be impressed by Andy’s essential grasp of the raw ingredients of a subject some distance from his own. True, Andy had been applying his philosophy to various aspects of science for some time now, but his instinctive ability to get to the core of complex issues – with no relevant educational background – was not to be dismissed lightly. Maybe, in fact, that very subject-independence was actually an advantage, allowing him to strip away peripheral material without prejudice or bias, and then to abstract and distil. Anyway, he was always a good read!
Nor was it difficult to see Aisha’s influence on the piece. There were sections he was fairly sure would not have been that way in the first draft. He could almost picture the two of them arguing over concepts – even exact wording – in several places, before coming to an uneasy compromise. It was interesting that, despite the two of them coming from almost diametrically opposite fundamental points of view, they had managed to agree to a large extent on the conclusions – even if those conclusions were that there were no conclusions!
Anyway, thought Bob. That was enough of that. It was time to turn his thoughts to the ‘nuts and bolts and numbers’ of real networks. As he turned to his main desk display, it went blank for a few seconds for no particular reason, then reappeared.
PHASE TWO: DIAGNOSIS
Chapter 6: Many Failures
Tuesday, the day of Bob’s departure, did not begin so well. Interrupting the array of generally entertaining RFS stories being served up on the breakfast TV, radio and Internet channels was what appeared to be the first British fatality that could be directly attributed to it. A pensioner had been killed at a level crossing late the previous evening. One of the gates had inexplicably risen on his side of the tracks. Ignoring the other – it was thought, probably correct – signals, he had driven part-way over without realising that the exit was still blocked on the other side. He had attempted to reverse but became stuck on the tracks and did not quite clear the area in time. The train had only caught the car a glancing blow but that had been enough.
The media were clearly struggling with how to cover RFS – or even how to decide when an event was or was not part of the phenomenon. As serious as events like this were, there was much potential for amusement as well. The UK tragedy was followed by a short piece from Las Vegas. Over the weekend, a casino’s centrepiece slot machine had apparently paid out a million-dollar jackpot twice in quick succession. Despite the American reporter’s seemingly reasonable observation that ‘Surely, these things will duplicate by chance occasionally?’, the owner was furious. He had ‘paid good money for this particular model’ and these random fall-outs in customers’ favour ‘totally shouldn’t happen!’ In New Zealand, an automatic farm security system had reported a remote herd of RFID-tagged cows missing; an entire rural police force had been activated in response. The cows, it emerged, had not gone anywhere.
Bob slipped out of the door quietly to find the taxi waiting at the end of the drive. His wife was still in bed asleep. He was dressed casually as always: people paid for his expertise – not his dress-sense. He lifted, rather than pulled, his case round towards the boot so as not to make a noise on the gravel. A cold, fine rain – almost a mist – made him shiver as he slipped into the passenger seat. The taxi was too warm inside and the driver not much for conversation; Bob nodded drowsily, the soft tunes on the radio seducing him into a conviction that he was awake too early, as they cruised towards the airport. The journey seemed unusually short.
An automatic door at the entrance to the departures area was refusing to open as he rolled his case towards it. A small queue had formed as a maintenance man levered the two halves apart with a steel rod that looked very much as if it had been designed for that exact purpose. Inside was relative normality, although the odd information board reset itself from time to time and the check-ins were having occasional issues with the network systems. On the whole, the place was functioning without too much difficulty, although traces of RFS were evident everywhere in the margins. There were some delays, including to his flight, but everyone was getting by. Normally, these malfunctions would have passed with little more than mild exasperation but knowing (or at least suspecting) them to be part of a wider phenomenon cast it all in a different light. Bob turned it all over in his head for the hundredth time, but to no avail. He still did not know what it was and it still did not make sense.
*
The next nine days were a routine Bob knew well. Planes, taxis, offices, restaurants, more taxis and hotels. A blur of departure lounges, roads, glass buildings, too much food and over-soft beds. He and Hattie followed a continentally similar, but locally different, path; meeting as and where needed and usually parting immediately afterwards to take their separate routes to the next destination. There was no set pattern for this or how it worked. The couriers knew what was needed, where and when; they worked it all out and were well rewarded for it, including for the careful treatment of their delicate and precious cargo. But, all the time, RFS was getting visibly worse.
*
There were two places to visit in Paris. The problem at the first was trivial: a simple set-up issue. There were probably people in the building at that very moment, Bob thought with some amusement, who could have fixed it. They had not needed him or Hattie. Nevertheless, he had switched everything on and made a token attempt at taking some readings before making the necessary changes to configuration files. Bob felt he had to make it look as if they were getting their money’s worth; they were going to pay handsomely for the over-the-top attention so he, at least, was happy.
There was even time for some Christmas shopping in an up-market French shopping centre. In the space of an hour and a half, he was able to eat a light lunch and buy the sort of presents for Jill, Chris, Heather and Ben that would be difficult to find at home: this might keep people happy when he returned, he chuckled to himself. But, even here, RFS was very evident: lights, tills, interactive displays, parking meters, the public address system, even the occasional alarm or security system; all were proving problematic from time to time. Most things were taking longer than they should and public bemusement was mounting.
He was able to make the second technical visit, as planned, the same afternoon. This next network proved to be something more of a challenge – although hardly a difficult one. It eventually transpired that there were two separate problems, at different network levels, each managing to obscure the exact effect of the other. With a reasonable amount of networking knowhow and experience, and a bit more time, they could have been diagnosed in a more conventional manner, but Hattie’s ‘holistic’ approach made short work of it and the company cheerfully signed off the completion document without question. Over the whole day, Bob mused as his taxi swept him to his overnight hotel, RFS had caused him a fair few more problems than these two French networks had.
But Darmstadt was different.
Bob’s appointment was at HGMS-Ion, a research facility on the outskirts of the town. He flew into Frankfurt airport, was met by a suited driver and taken by private car – by autobahn save for the last few miles – directly to the front gate, where he was greeted at the security desk by Ulrich Bär, the contact with whom he had arranged most of the visit. He had decided, in advance, that this one was going to be slightly ‘odd’.