Science Fiction

Artificial Intelligence: A bit of fiction

What do we mean by AI?

By the start of the 21st century it was clear that encoding a set of behavioral rules did not constitute intelligence. Despite the increasingly complex decision trees built into expert systems, and the variety of responses that could simulate human interaction, nothing resembling a thinking machine was on the horizon; the claims of prominent mid-20th-century researchers that the problem was substantially solved had proved overly optimistic.

It was not until the middle part of the 21st century that meaningful progress was made, enabled by the convergence of three critical disciplines.

The scaling of planar semiconductor technology reached its zenith, culminating in trillion-transistor chips with thousands of processor cores and gigabytes of on-board memory. Coupled with unprecedented data storage capacity, these chips allowed entire simulated 3D worlds to be created across the connected globe, building the fledgling infrastructure that would eventually provide a home to a new form of life: synthetic sentience.

At the same time, neuroscience research unraveled the relationships between individual neurons, collections of cooperating neural networks, and the emergent cognitive abilities that resulted. Although the picture was incomplete and fractured, the importance of the insights gained during this renaissance era cannot be overstated. Molecular self-assembly was the last piece of the puzzle to fall into place, arriving late on the scene but becoming an enabling force as scientists and engineers from divergent disciplines experimented with the seemingly infinite possibilities the technology opened before them. Leaping from the broad platform that the microelectronics industry had created in nearly a century of progress, silicon-cell electronics became the scaffold of abstract thought that had eluded scientists for generations.

In hindsight, it seems almost inevitable that man-made life should emerge from this cauldron of ingenuity, but at the time it was anything but a foregone conclusion.

Personal notes, Henrietta Climbsworth, On the Construction of a New World Mind; Dublin, Ireland; August 2201

One comment

  1. Purik

    I think the best way a quantum computer was ever explained to me involved the question: what's the difference between a computer and a physics experiment? Around the 1950s and '60s, a computer actually was a physics experiment, but nowadays the connection between the two does not seem so obvious.

    Think about this: every single operation, every computation that a computer performs is, in essence, a physics experiment. The experiment may involve a piece of ferromagnetic material reacting to a small electromagnetic field, or perhaps a measurement of the path of an optical beam when it hits an intricately designed reflective surface. It could alternatively be a measurement of the positions of several pins pushed up against a punch card, but we haven't seen that sort of computer for some time.

    Designing a computer (or computer chip) inevitably involves setting up some sort of physics experiment, the results of which are interpreted as a computation. An important question arises: is it possible to choose our physics experiment cleverly, so as to maximize the efficiency of the computation?

    The answer, as you may have guessed, is yes. Here's how it works. Imagine you want to use a computer to simulate a physics experiment; in other words, to use the electromagnetic or optical systems in your computer to calculate the result of a different physical experiment, like the final speed of a needle dropped off the Empire State Building, the magnetic response of a piece of iron placed in some field, or the resulting state of an interaction between two molecules. If your computer is built properly, it should always be able to perform the calculation faster than it would take a physicist to perform the physical experiment. Why is this the case? Suppose the physical experiment were faster. Then we could, at least in principle, design a computer or chip constructed from that physical experiment, and it would have a faster computational speed. In other words, if gravity were strong enough that our Empire State Building test ran faster than the magnetic responses in our computer chips, you could design a computer based on dropping objects from the Empire State Building, and it would be faster than our current computers running Windows XP (though that may be the case anyway).

    Quantum mechanics provides us with an example of a physical experiment which, in principle, can be carried out faster than it can be simulated on modern-day computers. The reason is that the quantum world is probabilistic in nature, so a physical system can be in a superposition of many states at once. At the risk of coining a cute catchphrase for the layman, it is as if we're getting both states of Schrödinger's cat to work for us at the same time. It's cute because you picture two kittens in a cubicle, typing away, except one of them's dead.

    Anyway, back to the physics. A quantum computer must be constructed in a significantly different manner than its classical counterparts. The most important point is that a standard computer runs on a series of if-then statements: it measures a given state, and its next action depends on the result of that measurement. A quantum computation, by contrast, cannot read data from the initial state, because reading it would inevitably change it. Instead, it performs a transformation on the state which does not measure it (commonly referred to in the literature as a unitary transformation).
    This transformation is essentially the quantum equivalent of an if-then statement, in that it sends state A to state X. The difference is that the system is in a quantum superposition of states, meaning it's a combination of states A, B, C, which gets sent to a combination of states X, Y, Z. The effective result is that we cut out the portion of the experiment where we observe the actual value: if we start off with a system observed to be in state A, we will end up with a system observed to be in state X, but we never had to perform the part of the experiment that checks which state it was in to begin with. When the computation requires a large number of steps, this can cut down computation time considerably.

    To be more concrete, consider an electron whose spin state can be described as up, down, or a combination of the two, written as α|+> + β|->. Here α and β are numbers from which we can determine the probability of finding the electron in the up or down state, respectively. If we have a system of, say, three electrons, the state would be described by a combination like α₁|+++> + α₂|++-> + α₃|+-+> + α₄|+--> + …, eight possibilities in total. A unitary transformation sends each |xxx> to another combination of states, and when the dust settles you end up with a new quantum state described by a formula α′₁|+++> + α′₂|++-> + …. Recall that we have not yet observed our system, yet we have performed a calculation which will give a different result depending on our input. After a series of these unitary transformations, the final state is measured, and the resulting quantum numbers {α₁, α₂, …} of the final state are found; these are the data resulting from the computation. (A toy numerical sketch of such a transformation appears at the end of this comment.)

    The process of determining these final quantum numbers is far from straightforward. After all, the particles are in a probabilistic quantum state, and the act of measurement forces the system to choose one of the eight possible states, with probability dependent on the quantum number of that state. Thus what we actually need to measure is the probability of each outcome, and this can only be determined by performing the same experiment a large number of times and keeping track of how often each outcome is recorded (see the second sketch below). This is one of the major drawbacks of quantum computation, but if the computation is large enough, the increased efficiency is well worth the time cost of many successive experiments.

    The current hurdle for quantum computers is practicality. Most of the theory is worked out, but it is unclear how to implement the concept effectively. One major problem is that we have to keep a group of particles in a coherent state, in complete isolation, to prevent any outside influence from making an unwanted measurement of the system. Another issue is that we've only just recently been able to trap single ions for long enough to make computations of this nature, and even so, the apparatus capable of doing so is very large (a single ion trap may be about the size of your fist). However, there are other candidates for quantum computation: a quantum chip may take the form of an intricate crystal, with the states of its molecules providing the necessary data storage. Then again, performing a non-intrusive measurement on the state of an individual molecule in a crystal is no easy task.

    There are plenty of other possibilities, since the "state" of a particle is a fairly open term.

    In the end, it will most likely come down to the question of which method can be condensed into the smallest physical space.
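    To make the "transformation without measurement" idea concrete, here is a minimal sketch in Python with NumPy; it is purely my own illustration using standard textbook conventions, with made-up amplitudes. A single spin's numbers α and β are stored as a vector, and a unitary matrix rotates that vector without ever inspecting the outcome.

        import numpy as np

        # A spin state a|+> + b|->, stored as the amplitude vector [a, b].
        # |a|^2 + |b|^2 = 1, so the two outcome probabilities sum to one.
        state = np.array([0.6, 0.8], dtype=complex)

        # An example unitary: the Hadamard transform, which sends each basis
        # state to an equal-weight combination of both basis states.
        H = np.array([[1,  1],
                      [1, -1]], dtype=complex) / np.sqrt(2)

        # Applying the unitary never "reads" the state; it only rotates the
        # amplitude vector, so a superposition is processed in one step.
        new_state = H @ state

        # Unitarity preserves the norm, so the result is still a valid state.
        assert np.isclose(np.linalg.norm(new_state), 1.0)
        print(new_state)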
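    And here is a rough sketch of the measurement-statistics point, again in Python with NumPy, with the amplitudes and the choice of unitary invented just for illustration. A three-spin state has eight amplitudes, one unitary transforms all eight at once, and many repeated simulated "experiments" tally how often each outcome appears in order to estimate the probabilities.

        import numpy as np

        rng = np.random.default_rng(0)

        # Eight amplitudes for the eight spin configurations
        # |+++>, |++->, ..., |--->, normalized so probabilities sum to one.
        amps = np.array([3, 1, 2, 1, 1, 2, 1, 1], dtype=complex)
        amps /= np.linalg.norm(amps)

        # A unitary on three spins: the Hadamard transform on each spin,
        # combined into a single 8x8 matrix with the Kronecker product.
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        U = np.kron(np.kron(H, H), H)

        # The computation: all eight amplitudes transform in one step,
        # and nothing has been measured yet.
        final = U @ amps

        # Measurement collapses the state to one configuration with
        # probability |amplitude|^2, so we rerun the experiment many
        # times and count the outcomes to estimate those probabilities.
        probs = np.abs(final) ** 2
        counts = rng.multinomial(10_000, probs)

        labels = [format(i, "03b").replace("0", "+").replace("1", "-")
                  for i in range(8)]
        for i, label in enumerate(labels):
            print(f"|{label}>  estimated p = {counts[i] / 10_000:.3f}"
                  f"  (exact {probs[i]:.3f})")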
