How Long Before Superintelligence?

By Roberto Blum

This article examines how likely it is that we will develop superhuman artificial intelligence within the first third of this century.

It surveys the various estimates of the processing power of the human brain; how long it will take for computer hardware to achieve comparable performance; how difficult it will be for neuroscience to explain how brains work; and how long we will have to wait for superintelligence to develop once human-level artificial intelligence exists.

Superintelligence

We invite you to read the article How Long Before Superintelligence? by Nick Bostrom:


Definition of "superintelligence"

By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain - for example, you can't have a real-time conversation with "the scientific community".


Moore's law and present supercomputers

Moore's law states that processor speed doubles every eighteen months. The doubling time used to be two years, but that changed about fifteen years ago. The most recent data points indicate a doubling time as short as twelve months. This would mean that there will be a thousand-fold increase in computational power in ten years. Moore's law is what chip manufacturers rely on when they decide what sort of chip to develop in order to remain competitive.
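As a quick check on the arithmetic, a twelve-month doubling time does indeed imply roughly a thousand-fold increase over ten years, since 2^10 = 1024. A minimal sketch comparing the three doubling times quoted above (purely illustrative):

```python
def growth_factor(elapsed_years: float, doubling_time_years: float) -> float:
    """Multiplicative increase in processing power after elapsed_years,
    assuming a constant doubling time."""
    return 2.0 ** (elapsed_years / doubling_time_years)

for doubling_time in (2.0, 1.5, 1.0):  # 24-, 18- and 12-month doubling
    print(f"{doubling_time:.1f}-year doubling: "
          f"{growth_factor(10, doubling_time):7.0f}x in ten years")

# 2.0-year doubling:      32x in ten years
# 1.5-year doubling:     102x in ten years
# 1.0-year doubling:    1024x in ten years  (the thousand-fold figure)
```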

If we estimate the computational capacity of the human brain, and allow ourselves to extrapolate available processor speed according to Moore's law (whether doing so is permissible will be discussed shortly), we can calculate how long it will take before computers have sufficient raw power to match a human intellect.
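For illustration, here is what that extrapolation looks like as a calculation. The brain-capacity figures below (10^14 and 10^17 operations per second) are assumptions chosen for the sake of the example, not estimates established in this excerpt; the starting point is the 1.5 teraops figure quoted next:

```python
import math

def years_until(target_ops: float, current_ops: float,
                doubling_time_years: float) -> float:
    """Years before capacity, extrapolated at a constant doubling time,
    reaches target_ops."""
    return doubling_time_years * math.log2(target_ops / current_ops)

current = 1.5e12  # fastest supercomputer of late 1997, ops/second
for brain_estimate in (1e14, 1e17):   # ASSUMED bracketing estimates
    for doubling in (1.0, 1.5):       # years per doubling
        print(f"target {brain_estimate:.0e} ops, {doubling}-year doubling: "
              f"~{years_until(brain_estimate, current, doubling):.0f} years")
```

Under these assumptions the answer ranges from about six years (low brain estimate, twelve-month doubling) to about twenty-four years (high estimate, eighteen-month doubling).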

The fastest supercomputer today (December 1997) is 1.5 teraops, 1.5*10^12 ops. There is a project that aims to extract 10 teraops from the Internet by having a hundred thousand volunteers install a screen saver on their computers that would allow a central computer to delegate some computational tasks to them. This (so-called metacomputing) approach works best for tasks that are very easy to parallelize, such as doing an exhaustive search through a search space in attempting to break a code. With better bandwidth connections in the future (e.g. optical fibers), large-scale metacomputing will work even better than today. Brain simulations should by their nature be relatively easy to parallelize, so maybe huge brain simulations distributed over the Internet could be a feasible alternative in the future. We shall however disregard this possibility for present purposes and regard the 1.5 Tops machine as the best we can do today. The potential of metacomputing can be factored into our prognosis by viewing it as an additional reason to believe that available computing power will continue to grow as Moore's law predicts.
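The kind of task described, an exhaustive and embarrassingly parallel search, splits naturally into independent chunks. A toy stand-in for the delegation idea, using a local process pool rather than a hundred thousand volunteer machines (the key-search problem and all names here are invented for illustration):

```python
from multiprocessing import Pool

SECRET = 739_218  # the "key" the exhaustive search is looking for (toy value)

def search_chunk(bounds):
    """Exhaustively scan one chunk of the key space; return the key if found.
    Each chunk is independent, which is what makes the task easy to farm out."""
    start, stop = bounds
    for key in range(start, stop):
        if key == SECRET:  # stands in for "does this key decrypt the message?"
            return key
    return None

if __name__ == "__main__":
    chunk = 100_000
    chunks = [(i, i + chunk) for i in range(0, 1_000_000, chunk)]
    with Pool() as pool:  # the "central computer" delegating to workers
        for result in pool.imap_unordered(search_chunk, chunks):
            if result is not None:
                print("key found:", result)
                pool.terminate()
                break
```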


Even without any technological improvement we can do somewhat better than this, for example by doubling the number of chips that we put in the box. A 3 Tops computer has been ordered by the US government to be used in testing and developing the nation's stockpile of nuclear weapons. However, considering that the cost of this machine is $94,000,000, it is clear that even massive extra funding would only yield a very modest increase in computing power in the short term.
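The point can be made numerically: money buys computing power roughly linearly, while waiting buys it exponentially, so even a large budget increase is soon overtaken. A back-of-the-envelope sketch (the tenfold-funding scenario is hypothetical):

```python
import math

# Suppose a tenfold budget increase buys roughly tenfold the hardware.
funding_multiple = 10
equivalent_doublings = math.log2(funding_multiple)  # ~3.3 doublings
for doubling_time in (1.0, 1.5):                    # years per doubling
    wait = equivalent_doublings * doubling_time
    print(f"Moore's law delivers a {funding_multiple}x gain in "
          f"~{wait:.1f} years at a {doubling_time}-year doubling time")
```

That is, what ten times the money buys today, ordinary progress delivers in roughly three to five years.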

How good are the grounds for believing that Moore's law will continue to hold in the future? It is clear that sooner or later it must fail. There are physical limitations on the density with which matter can store and process information. The Bekenstein bound gives an upper limit on the amount of information that can be contained within any given volume using a given amount of energy. Since space colonization would allow at most a polynomial (~t^3) expansion rate (assuming expansion rate is bounded by the speed of light), the exponential increase of available computational power cannot be continued indefinitely, unless new physics is forthcoming.
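The reason a light-speed-limited expansion cannot sustain exponential growth is the standard one: an exponential eventually dominates every polynomial. A quick numerical illustration (units arbitrary):

```python
# Exponential growth (doubling per time step) versus cubic growth (~t^3),
# the at-most-polynomial rate allowed by light-speed-limited expansion.
for t in (10, 50, 100, 200):
    exponential = 2 ** t
    cubic = t ** 3
    print(f"t={t:4d}: 2^t / t^3 = {exponential / cubic:.3e}")
# The ratio grows without bound: resources gathered by spatial expansion
# fall ever further behind an exponentially growing demand.
```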

In my opinion, Moore's law loses its credibility long before we reach absolute physical limits. It probably hasn't got much predictive power beyond, say, the next fifteen years. That is not to say that processor speed will not continue to double every twelve or eighteen months after 2012; only that we cannot use Moore's law to argue that it will. Instead, if we want to make predictions beyond that date, we will have to look directly at what is physically feasible. That will presumably also mean that we have to content ourselves with a greater uncertainty interval along the time axis. Physical feasibility studies tell us, at best, what will happen given that people want it to happen; but even if we assume that the demand is there, it will still not tell us when it will happen.


In about the year 2007 we will have reached the physical limit of present silicon technology. Moore's law, however, has survived several technological phase transitions before, from relays to vacuum tubes to transistors to integrated circuits to Very Large Scale Integrated circuits (VLSI). There is no reason why present VLSI designs on two-dimensional silicon wafers should be the last word in chip technology. Several ways to overcome the limits of the present technology have been proposed and are being developed.

In the near future, it might for example be possible to use phase shift masks to push the minimum circuit-line width on a microchip down to as little as 0.13 micrometer, even while remaining in the optical range with the lithographic irradiation. Leaving the optical range, we could use x-rays or at least extreme ultraviolet ("EUV", also called "soft x-rays") to attain still finer precision. Failing this, it should be feasible to use electron beam writing, although this production method would be slow and hence expensive. A compromise would be to write some of the gates with an electron beam, especially at bottlenecks where speed is absolutely crucial, and use optical or EUV to write the other elements of the chip.

We can also increase the power of a chip by using more layers, a technique that has only recently been mastered, and by making bigger wafers (up to 300 mm should not be a problem). Drastically bigger chips could be manufactured if there were some error tolerance. Tolerance to error could be obtained by using evolvable hardware (de Garis 1997).
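The evolvable-hardware idea is to search for circuit configurations that work despite defective elements, rather than demanding a perfect wafer. A minimal genetic-algorithm sketch of that kind of search (the bit-string encoding, fitness function, and defect set are all invented for illustration; real evolvable hardware evaluates candidate configurations on the chip itself):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for a working configuration
DEFECTS = {2, 5}                    # gate positions assumed broken, so ignored

def fitness(genome):
    """Score a candidate configuration, skipping defective positions."""
    return sum(g == t for i, (g, t) in enumerate(zip(genome, TARGET))
               if i not in DEFECTS)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET) - len(DEFECTS):
        break  # found a configuration that works around the defects
    # Keep the top half, refill with mutated copies of the survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
print("generation", generation, "best configuration:", best)
```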


It is also possible to push the physical limits on how small the transistors can be made by switching to new materials, such as Gallium Arsenide. Quantum transistors are presently being developed, promising a major step forward for circuitry where high switching speed or low energy consumption is essential.

Because of the highly parallel nature of brain-like computations, it should also be possible to use a highly parallel architecture, in which case it will suffice to produce a great number of moderately fast processors and then have them connected. You could either put them in the same box, which would give you a bus-based multiprocessor (quite popular today), or you could link them up to a high-bandwidth local-area network (an option that will be increasingly attractive as the performance of standard networking technology improves).
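To see why brain-like computation parallelizes so naturally, note that each unit's new state depends only on the previous state of the units it connects to, so the units can be partitioned across processors and updated independently within each time step. A minimal sketch under those assumptions (random weights, toy sizes; a real bus-based or networked multiprocessor would exchange state between steps over shared memory or the LAN):

```python
import random
from concurrent.futures import ProcessPoolExecutor

random.seed(0)  # so every worker process rebuilds identical toy data
N = 1_000       # number of simulated neurons (toy size)
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def update_slice(bounds):
    """Compute new activations for one partition of the neurons.
    Each partition reads only the (read-only) previous state, so
    partitions can run on separate processors with no coordination."""
    lo, hi = bounds
    return [max(0.0, sum(w * s for w, s in zip(weights[i], state)))
            for i in range(lo, hi)]

if __name__ == "__main__":
    workers = 4
    step = N // workers
    slices = [(k * step, (k + 1) * step) for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        new_state = [x for part in pool.map(update_slice, slices) for x in part]
    print("updated", len(new_state), "neurons in", workers, "partitions")
```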

These are all things that are being developed today. Massive funding is pumped into these technologies. Although the difficulties can appear staggering to a person working in the field, who is constantly focused on the immediate problems, it is fair to say that there is widespread optimism among the experts that computers will continue to grow more powerful for the foreseeable future.


Read the full article here.