Wiskott, L. (2001). Some Ideas About Organic Computing. Preproc. Organic Computing: Towards Structured Design of Processes, Paderborn, November 23-24, eds. H. Herzel, C. von der Malsburg, W. von Seelen, and R. P. Würtz, pp. 39-42. (This version differs from the original in that some misspellings have been corrected.)

Some Ideas about Organic Computing
==================================

by Laurenz Wiskott (written September 2001)

Well, I have been thinking about a possible position statement for the symposium on organic computing for quite some time now, but I still feel that I cannot provide any particularly qualified text. So be warned that this is a rather naive statement, in the sense that I have no experience with the hardware issues of organic computing and that I did not take the time to educate myself by reading related papers. I have some experience in neuroinformatics, but I don't feel like speculating on that, since, as far as I understand, the focus of the symposium is on combining and communicating between these fields rather than pushing forward any single discipline. I also feel that ethical issues should be raised, although I do not have much experience with those either. So here are some unqualified speculations and fragments on hardware-software and ethical issues of organic computing in the (partly rather far) future. Maybe they will induce some qualified thoughts on the reader's side.

Hardware-Software Issues
------------------------

It appears to me that with fundamentally different and much smaller computing and communication hardware, such as molecules and nanotubes, it will no longer be feasible or economical to reliably produce a detailed layout of these elements on a chip (or whatever it might be called). Thus it will be important to develop software that can run on unreliable hardware. Some steps in this direction have already been taken based on classical silicon chips.

If one thinks along these lines a bit further, one might find that it would be even more efficient to give up the idea of producing a chip with a detailed layout altogether. The goal then would be to produce something that has as many computational elements as possible in a statistically reasonable configuration. By this I mean that the different types of computational molecules and nanotubes are placed such that as many molecules as possible are connected in a way that allows communication between them and yields as many computational degrees of freedom as possible. I guess this would most likely be a three-dimensional structure; let's call it a lump, in contrast to a chip. That way one might be able to produce enormous computational power at very low cost, and it would not matter at all if only 1% of it or less could actually be used for computation.
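To make the idea of a "statistically reasonable configuration" slightly more concrete, here is a toy simulation (my own illustration, not part of the original argument; the element count, communication radius, and unit-cube geometry are all invented): elements are scattered at random in a cube, any two within a fixed radius can communicate, and we ask what fraction ends up in the largest connected cluster, i.e. is usable at all.

```python
import numpy as np

def largest_cluster_fraction(n_elements=2000, radius=0.08, seed=0):
    """Scatter elements uniformly in a unit cube, link all pairs closer
    than `radius`, and return the fraction of elements in the largest
    connected cluster (a crude stand-in for the usable part of a lump)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_elements, 3))

    # Union-find over all element pairs within communication range.
    parent = list(range(n_elements))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n_elements):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < radius)[0] + i + 1:
            parent[find(i)] = find(j)

    roots = [find(i) for i in range(n_elements)]
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n_elements

# The usable fraction jumps from almost nothing to almost everything
# as the density/radius crosses a percolation threshold.
for r in (0.04, 0.08, 0.16):
    print(r, largest_cluster_fraction(radius=r))
```

The only point of the sketch is the percolation-like behavior: above a critical density a single cluster captures most elements, so even a very sloppy production process could yield one large communicating structure.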
The problem, however, would then be to program this 'computational lump' (colump). This, of course, requires an interface. So let's imagine a thousand (or many more) 'wires' that are connected to the periphery of the colump. These are the technical means of communication with the colump (I see, this is not particularly innovative).

What would be the mechanisms by which the computational properties of the colump could be modified, i.e. how could it technically be programmed?

(i) One way might be to physically change the colump, e.g. by applying strong currents to burn away parts of it (quite brutal).

(ii) It might be possible to grow additional computational elements on the surface of or within the colump. This growth could be guided by geometrical constraints or, for example, by activity levels within the colump.

(iii) By applying global voltages or light of specific spectra to the colump, one might be able to influence the computational properties of all elements in the colump (quite unspecific). This could be used for resetting or for controlling the overall activity or plasticity.

(iv) Another way would be to program individual elements to perform one computation or another (quite unrealistic, since individual elements are hardly addressable).

(v) Finally, the state of individual elements could change based on their computational experience, much like local learning rules in neural networks (feasible and specific, but not well controllable).

The hardware as imagined here would not permit classical programming in any way. A colump would be too complex to be told in detail what to do. Also, there would be no way of downloading a program, since the computational elements would not be well addressable, and each colump would be different in any case. Notice that there is no longer a clear distinction between software and hardware.

So, how could the colump be taught to do something useful with the technical mechanisms outlined above? I see three possibilities that will have to work together: configuring, self-organization, and teaching.

Configuring: Mechanisms (i) to (iv) above would all be different ways of configuring the colump to increase its performance and adapt it to specific needs. (i) to (iii) are all quite unspecific; (iv) is specific but probably applicable only in rare cases. (i) and (ii) would result in permanent configuration changes, while (iii) and (iv) could be used for reconfiguration on a fast time scale. In general, I think, configuring the colump could only set the frame within which more refined techniques can be applied.

Self-organization: The states of the computing elements themselves are probably the best signals for modifying the computational properties of the elements. This is very much in the spirit of neural network learning rules. By self-organization I mean a process by which the colump changes its computational properties without any specifically structured input. Some rather unspecific input will be needed to provide energy (?) or to guide the colump through different phases of the self-organization process.

(i) One principle of self-organization could be to reduce redundancy between computing elements by a learning rule that makes the state patterns of connected elements statistically independent (see the sketch after this list). This could serve to increase the number of computational degrees of freedom in the colump.

(ii) Another principle could be just the opposite. It might be favorable to have local redundancy to make computation more reliable or to improve the effective connectivity. By the latter I mean the following: while a single element is only connected to a few other elements, a group of tightly coupled elements would be connected to many other groups of elements, which would be advantageous. A learning rule could increase local correlations and reduce global statistical dependencies simultaneously.

(iii) Self-organization could also be used to improve communication within the colump by establishing long-range connectivity and connectivity to the communication 'wires'. For example, a learning rule could favor the propagation of states through the colump and thereby establish signaling pathways.
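As an illustration of principle (i), here is a minimal sketch of a local anti-Hebbian decorrelation rule in the spirit of classical neural-network learning rules (the network size, the linear recurrent response, the learning rate, and the Gaussian inputs are all my own assumptions, not something the text prescribes): each lateral connection is weakened in proportion to the correlation of the states of the two elements it connects, which drives connected elements toward statistically independent activity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, steps = 4, 0.01, 3000

# Source of correlated inputs: mix independent signals with a random matrix.
mix = rng.normal(size=(n, n)) / np.sqrt(n)

# Lateral weights between 'connected elements' (zero diagonal, symmetric).
W = np.zeros((n, n))

def respond(x):
    """Recurrent response y = x + W y of the laterally coupled elements."""
    return np.linalg.solve(np.eye(n) - W, x)

for _ in range(steps):
    y = respond(mix @ rng.normal(size=n))  # response to a correlated input
    dW = -eta * np.outer(y, y)             # anti-Hebbian: punish correlated states
    np.fill_diagonal(dW, 0.0)              # no self-connections
    W += dW                                # update is symmetric, so W stays symmetric

# After learning, the off-diagonal output correlations should be near zero.
samples = np.array([respond(mix @ rng.normal(size=n)) for _ in range(2000)])
print(np.round(np.corrcoef(samples.T), 2))
```

Principle (ii) could be obtained from the same ingredients by flipping the sign of the update within a local neighborhood, so that nearby elements become correlated while distant ones remain decorrelated.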
Teaching: Configuring and self-organization can only prestructure the colump so that it has advantageous properties for the main teaching phase. Teaching would be technically similar to self-organization, with the difference that structured input is used so that the colump can adapt to particular data and environments. This could be controlled by some of the global configuring techniques described above.

What types of computation could be realized in this way? Some possible computations are known from neuroinformatics, first of all associative memories, e.g. Hopfield nets. These memories not only provide storage capacity but also some computational power, in that they complete stored patterns from incomplete cues. This can also be used for temporal patterns such as speech. One could imagine a little device with which you can store some acoustic information and then recall it with incomplete cues, such as just the first few words of a sentence or story. This might be useful as an address book: you just say the name of a person and the device completes the address and phone number. These types of networks could also perform some more complicated computations, such as finding similar patterns, grouping patterns, etc., so that the device structures the stored patterns in some reasonable way.
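For concreteness, here is a minimal Hopfield-style associative memory (the textbook construction, not anything specific to columps; the pattern size, the stored patterns, and the cue are invented for the example). Patterns are stored in symmetric weights with the Hebbian outer-product rule, and an incomplete cue is completed by repeated threshold updates until the state settles.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 64, 4

# Store random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(cue, sweeps=20):
    """Complete a pattern from a cue by asynchronous threshold updates."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):          # asynchronous update order
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Incomplete cue: first half of pattern 0 is known, the rest is noise.
cue = patterns[0].copy()
cue[n // 2:] = rng.choice([-1, 1], size=n // 2)
out = recall(cue)
print("overlap with stored pattern:", (out @ patterns[0]) / n)  # ~1.0
```

For temporal patterns such as speech, the symmetric weights would have to be replaced by asymmetric ones that map each state to its successor, so that a cue launches a stored sequence rather than a static pattern.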
Due to the homogeneity of the columps, it would probably be difficult to implement very complex computations, which require very different types of computation working together. It would then be necessary to combine columps of different characteristics, as in our brain, where different, fairly homogeneous areas are connected and work together. In summary, the art of 'programming' would be to select the right columps, connect them in the right way, configure them, run a suitable self-organization scheme, and then train the system.

These types of columps would probably not be suitable for applications that can be well formalized or where reproducibility is important, as in science. Imagine you do some simulations with a colump and nobody can reproduce your results. Problems also arise from the fact that programs cannot be uploaded from or downloaded onto these columps. There might be no way of making a backup, for instance, and it would be very difficult to delete some information without destroying other information (one might implement forgetting, though). Your computer would really become a personal device, which could not be replaced easily, because it had years of teaching and adaptation to your needs. It might be possible, though, to connect your old computer to your new computer for a couple of months and have the old computer teach the new one all it needs to know to serve you best.

So much for the software(/hardware) issues of organic computing as I see them. If we are actually able to build and 'program' such computers, a number of ethical questions will also arise, which I discuss in the following.

Ethical Issues
--------------

I would like to list some ethical questions here and give intuitive and quick answers without any further justification. It may well be that on second thought I would come to a different conclusion, but I think it is useful to have some answers here, so that everybody can disagree with them.

If we are able to build organic computers of high complexity, will we still be able to understand them in detail? No!

If we don't understand in detail how a complex organic computer (COC) works, can we assess how reliably it will work? If yes, how? Only in a statistical sense, by means of test runs.

If we cannot say with certainty that a COC works reliably, are we allowed to use it to control sensitive processes, like flying a jumbo jet? Yes, if the COC works on average more reliably than a human or a conventional computer.

Can we control a COC with a conventional program to make it more reliable? Yes! The least we can do is monitor it.

Who takes the responsibility if a COC fails and the jumbo jet crashes? I guess the question of responsibility would not be much different from the situation with failing programs today, except if one assigns consciousness to the COC, in which case the COC itself might be held responsible; see below.

Could a COC have a level of intelligence comparable to ours? Yes! Furthermore, if it can have a comparable level of intelligence, it is only a matter of scaling up the COC for it to become more intelligent than we are. However, intelligence is not a scalar quantity; the levels of intelligence will differ greatly with the domain: mathematical intelligence, verbal intelligence, motor intelligence, emotional intelligence, etc. In some domains COCs will surpass us, in others not.

Is consciousness something that emerges naturally with intelligence, or could it be quite independent of it? I guess consciousness is not necessarily connected with intelligence, but there will be a very strong correlation, so strong that it will be hard to prevent an intelligent COC from having consciousness.

Can there be consciousness without self-consciousness? I guess it would be possible if the COC could not sense its own actions. But then, I guess, the COC could not really learn to be intelligent. Thus, intelligence and self-consciousness are probably very closely coupled.

Could we decide from outside whether a COC has self-consciousness or not? We will never be able to tell with certainty; that even holds for other humans. But I guess that, for communication, the COC will have human language, and then, from a certain level of intelligence onwards, we can't help but have the strong feeling that the COC has self-consciousness.

Is there a point at which we have to build something like emotions into a COC in order to keep it motivated? Emotions might be helpful, but they might also be dangerous to have in a computer, or undesirable for ethical reasons; see below. What are emotions in a computer anyhow?

If a COC has self-consciousness, do we have a responsibility for it? Can we simply turn it off, or would that be considered unethical? Can it be owned by somebody? Well, I think there are two reasonable points of view: a) If it has self-consciousness, it would be killing if we turned it off, and it would be slavery if we owned it. b) It is not self-consciousness or intelligence that matters but the ability to suffer. The question then is: would a COC suffer if it knew that it would be turned off, or if it were owned by somebody? This might be a strong reason for not building something like emotions into a COC.

Could a COC take responsibility? Yes! But only if it had self-consciousness and, I guess, only if it had emotions. It must be able to feel guilty or bad about something.

Can it be guaranteed that COCs stay loyal to humans? No! What if they don't?