Communications of the ACM, April 1999

Analysis of unconventional evolved electronics


In recent years, several research groups have demonstrated the potential for artificial evolution to design electronic circuits automatically. An "evolutionary algorithm" is implemented, usually as a computer program, which mimics some aspects of Darwinian evolution: small random variations (mutations) are repeatedly made to one or more candidate circuit designs. Natural selection is replaced by a "fitness" measure of the degree to which a circuit meets the target engineering specification of behavior, size, power consumption, and so on. Mutations resulting in a poor fitness measurement are rejected, whereas those producing an improvement (or at least no deterioration) are allowed to persist and be built upon by further rounds of variation and selection. Commencing with either randomly generated circuit designs, results of previous evolutionary experiments, or an initial hand-designed attempt, repeated cycles of automated evolution can eventually produce circuits that satisfy the specification. Many elaborations to this basic scheme exist, such as allowing recombination of features from two or more individuals in an evolving population of variants.
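
As a concrete illustration, the following is a minimal sketch in Python of the simplest such scheme, a (1+1) evolutionary algorithm. The genotype length and mutation rate are arbitrary placeholders, and `evaluate_fitness` stands in for whatever circuit-testing procedure an experiment actually uses.

```python
import random

def mutate(genotype, rate=0.01):
    """Flip each configuration bit independently with small probability."""
    return [bit ^ (random.random() < rate) for bit in genotype]

def evolve(evaluate_fitness, length=1800, generations=5000):
    """Minimal (1+1) evolutionary algorithm: keep a single parent design
    and accept a mutant whenever it is no worse than the parent."""
    parent = [random.randint(0, 1) for _ in range(length)]
    parent_fitness = evaluate_fitness(parent)
    for _ in range(generations):
        child = mutate(parent)
        child_fitness = evaluate_fitness(child)
        if child_fitness >= parent_fitness:  # improvement or no deterioration
            parent, parent_fitness = child, child_fitness
    return parent, parent_fitness
```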

Electronics design through artificial evolution is becoming practical for some real-world applications [9], especially in domains in which conventional design methodologies struggle. These include automated analog circuit design [7], and design under difficult implementation constraints such as size, power consumption, or even fault tolerance [10]. Here there are applications even for relatively simple evolved circuits.

When setting out to evolve a circuit for some task, there is a fundamental decision to be made: whether the evolutionary process is free to explore any possible design, or whether it is constrained to encourage "sensible" circuits more like those arising from conventional design methods. In many ways, the constrained approach seems like a good idea. The search-space of possible designs through which evolution must navigate is reduced in size to a set of circuits that we can readily understand, and some of which we know should be able to perform the task. Superficially, this suggests an increased chance of success, and a guaranteed utility of the final result, in that we know it is likely to be well-behaved and amenable to analysis using established techniques.

The unconstrained approach, however, is alluring [12]. Evolution is free to explore very unusual designs: circuits with strange structures and intricate dynamical behaviors beyond the scope of conventional design and analysis. In this larger search-space, there is the possibility of better solutions, if we are prepared to set aside our prejudices (based on existing design methods) of how an electronic circuit should be. The unrestricted design-space may also contain more evolvable circuits: those more susceptible to gradual adaptation, through a sequence of small changes, toward meeting the specification. Attempts to aid evolution by restricting it to conventional architectures, usually the products of top-down design, may actually hinder it if these structures are not so easily improved through evolution's more bottom-up activities. Furthermore, if the most promising application domains are the ones problematic for conventional design, then the exploration of new strategies is appropriate. Finally, if we are to learn from or be inspired by the products of artificial evolution, then we must give it the freedom to produce circuits that work in an unexpected way.

Clearly, to learn from an evolved circuit, we need to be able to discern some of its principles of operation. However, even if the goal is merely to evolve a circuit that works (and we don't need to know how), some degree of analysis may still increase its utility as an engineering product. In particular, if bounds on possible long-term changes in the circuit's behavior can be derived, then the circuit can be applied more widely with confidence. The need for extended consistent performance is difficult to accommodate within an evolutionary framework, because usually the tests for fitness measurement of candidate solutions (the bottleneck in the evolutionary process) are as brief as possible. The evolutionary approach can be made more viable if, through analysis, it can be predicted that a circuit will perform adequately in the long term, even though it was never tested for long during its evolution.

There are two components to long-term stability of behavior. First, the circuit must be insensitive to certain variations in its implementation or environment, or able to adapt to them. Examples include thermal drift, noise, and aging effects in semiconductor devices and integrated circuits. Second, it is possible for even simple dynamical systems to display intermittent behavior over long time scales [6]. This is not due to any external fluctuations, but is a property of the system's own dynamics. Circuits can be constructed which will inevitably—though after a long period of normal operation—suddenly and unpredictably change in their qualitative mode of behavior; possibly forever, or perhaps to return to normal operation for another long interval [3]. An evolutionary algorithm, unless using debilitatingly long fitness evaluation tests, would be blind to this pathological behavior, and could present such a circuit as a solution to the engineering specification. Inherently erratic dynamics of this kind can also interact with the time-dependent exogenous influences mentioned previously. If analysis can provide reassurance against long-term sporadic misbehavior, the circuit is rendered more useful.
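
Pomeau and Manneville's route to intermittency [6] can be illustrated with a one-line map. The sketch below is a standard textbook construction, not a model of any circuit: the deterministic iteration x ← (x + x² + ε) mod 1 creeps through a "laminar" region for long, irregular stretches, punctuated by sudden chaotic bursts, with no external noise involved.

```python
def laminar_run_lengths(x=0.05, eps=1e-4, steps=200_000):
    """Iterate the Pomeau-Manneville map and record how long the state
    stays in the 'laminar' region near x = 0 before each chaotic burst.
    The system is fully deterministic, yet the quiet intervals between
    bursts are long and irregular."""
    runs, current = [], 0
    for _ in range(steps):
        x = (x + x * x + eps) % 1.0
        if x < 0.5:                # laminar: output looks well-behaved
            current += 1
        elif current:              # a burst ends the laminar phase
            runs.append(current)
            current = 0
    return runs

print(laminar_run_lengths()[:10])  # long, irregular quiet intervals
```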

In critical applications, complex circuits can be embedded within an error-recovering framework [1]. The error-recovery mechanisms themselves can be simpler, and perhaps formally verified. For example, if a failure condition is detected, the circuit responsible could be automatically reset to a safe initial state. The more a circuit's potential failure modes are understood, the more feasible it becomes to construct a resilient system containing them.

When encouraging evolution to explore beyond the scope of conventional design, there can be engineering benefits to some sort of analysis even without a complete understanding of how the circuit works. In the next section, we discuss what "analysis" is, and how it can be done. We then apply these techniques to illuminate an unconventional evolved circuit, showing that one can indeed address some analysis questions even if the circuit is so strange that parts of its inner workings remain unknown.


The Analyst's Toolbox

Analysis of exotic evolved circuits is different to that undertaken as part of orthodox design. At an abstract level, the appropriate tools are sometimes more akin to neuroscience than to electronic engineering. It is especially important to recognize that an evolved system may not have a clear functional decomposition. A functional analysis decomposes the system into semi-independent subsystems with separate roles; the subsystems interact largely through their functions and independently of the details of the mechanisms that accomplish those functions [8]. Systems designed by humans can usually be understood in this way, because of the "divide and conquer" approach universally adopted to tackle complex designs.


Although an evolved system may have particular functions localized in identifiable subsystems, this is not always so. Dynamic systems theory [2] provides a mathematical framework in which systems can be characterized without a functional decomposition. Hence, what to many people is the essence of understanding—being able to point to parts of the whole and say what function they perform—is not always possible for evolved systems. In this case, more precisely formulated questions regarding the organization of behavior must replace fuzzy notions of "understanding" or "explanation" rooted in functional decomposition. In our case, these questions are centered around the suitability of an evolved circuit for engineering applications. Addressing these questions, such as those regarding long-term dynamics, is what we mean by "analysis."

The successful action of a circuit can be considered as a property of the interface between its inner mechanisms and the external environment [8]: the inner mechanisms have been adapted so that the behavior at the interface satisfies the specification. Observations at the interface (for example, at input and output connections) during normal circuit operation may reveal little about the inner mechanisms, but instead will largely reflect the demands of the specification. Analysis therefore requires internal probing, and/or observation under abnormal conditions, either internal or external.

There are surprisingly many tactics that can be used to piece together an analysis:

  • Probing and abnormal conditions. Abnormal conditions include: manipulation of the input signals, varying temperature or power-supply voltage, replacing parts of the circuit with alternative or non-functional pieces, and injecting externally generated signals at internal points. Monitoring an internal voltage always has some side effect, often placing a mostly reactive load at the probing point. This may have negligible consequences, but potentially perturbs the measurement, or even stops the circuit from working altogether. Probing internal signals of a circuit implemented on a very large-scale integration (VLSI) chip can require routing extra connections to reach the external pins of the device, with a danger of further disrupting the circuit under study.
  • Mathematical techniques, including standard electronics theory, are preferable for their rigor and generality. If a whole unconventional evolved circuit is mathematically intractable, there may still be parts of it that yield.
  • Simulation of a circuit allows rapid and extensive interactive exploration. As we shall see, circuits evolved not in simulation but using real reconfigurable hardware may rely on detailed hardware properties not easily modeled in a simulation. Attempts at simulation can at least help to clarify the extent of this dependence.
  • Synthesis. A circuit can be implemented using alternative electronic devices. For example, a circuit evolved on a single VLSI reconfigurable chip might then be constructed out of a number of hardwired small-scale chips. This provides easy access for probing and manipulation of "internal" signals, and again can clarify what aspects of the hardware are important to the circuit's operation.
  • Power consumption, for the most common VLSI technology (CMOS), is related to the rate of change of the internal voltages. After removing any power-supply smoothing capacitors, power consumption can be monitored with high temporal precision while the circuit is exposed to test conditions.
  • Electromagnetic emissions, resulting from rapidly changing electrical signals, can sometimes be detected using a tuned radio receiver. Circuit activity within a chip, which might be difficult to monitor directly, can thus be roughly characterized.
  • Evolutionary history. The mechanism underlying a task-achieving behavior may be more apparent soon after its evolutionary origin, rather than after evolution has refined it to match the specification closely. It may be possible to identify the innovation (perhaps caused by one or more mutations) giving rise to the behavior's origin in an ancestor, and to relate this to the operation of the final circuit.
  • Population diversity. Sometimes there can be several slightly different (but related) forms of high-fitness circuit in an evolutionary population, which can help to reveal the basic mechanisms used.

Although unconventional evolved circuits can seem dauntingly unfamiliar, the analyst is far from powerless.


Case Study

We now present an analysis of probably the most bizarre, mysterious, and unconventional unconstrained evolved circuit yet reported. Our aim is to explore how analysis may be able to abate some of the worries associated with employing very unusual evolved circuits in an engineering application.

The task of the evolved circuit (see Figure 1) was simply to distinguish between 1kHz and 10kHz audio tones, producing a steady high output whenever one tone was present at the input, and a steady low for the other. It was evolved directly on a Xilinx XC6216 field-programmable gate array (FPGA). In simple terms, this VLSI device consists of an uncommitted array of components (or "cells"). Switches made from transistors (multiplexers) control how each component behaves, and how the components are connected to each other. The settings of the switches are controlled by the contents of RAM memory bits distributed throughout the chip; this memory can be rapidly and repeatedly written to by a host microprocessor, causing any of a vast number of possible electronic circuits to be physically instantiated in silicon.


Evolution was allowed to exploit the capabilities of the FPGA as freely as possible. Each candidate design was a setting of the configuration bits for a 10 x 10 corner of the array of cells. To test its fitness, each design was configured onto a real FPGA (no simulation), half-second bursts of 1kHz and 10kHz test stimuli were applied to its input, and a fitness score was automatically assigned according to the degree to which the output approximated the desired discrimination behavior. The behavior of the final circuit (shown in Figure 1) is nearly optimal according to this fitness measure.
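
The published fitness function rewarded a wide separation between the circuit's integrated output during 1kHz bursts and during 10kHz bursts [12]. The sketch below captures that idea in simplified form; `configure_fpga` and `measure_average_output` are hypothetical hooks standing in for the real hardware interface.

```python
import random

def evaluate_fitness(genotype, configure_fpga, measure_average_output,
                     trials=5):
    """Simplified sketch of the tone-discrimination fitness test.
    configure_fpga() and measure_average_output() are hypothetical
    wrappers around the actual hardware; the score shown (separation
    of mean output voltages) is a simplification of the published
    fitness function."""
    configure_fpga(genotype)           # instantiate the candidate on chip
    tones = [1_000] * trials + [10_000] * trials
    random.shuffle(tones)              # present bursts in random order
    low, high = [], []
    for freq_hz in tones:
        v = measure_average_output(freq_hz, burst_s=0.5)
        (low if freq_hz == 1_000 else high).append(v)
    # reward a large gap between the mean outputs for the two tones
    return abs(sum(high) / len(high) - sum(low) / len(low))
```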

The FPGA chip is intended for digital designs: the cells can be configured to perform various Boolean functions. However, being made of real transistors, a cell is really an analog component behaving in continuous time. These analog cells have very high gain (amplification); for most inputs their output rapidly saturates: either fully high or fully low. Consequently, if certain design rules are followed, then the system's behavior can be abstracted to a binary description: a digital logic system. The basic digital design rule is that recurrent connections—paths through the circuit by which a component's output can (indirectly) later affect its own input—must only be allowed to operate at particular discrete instants. This gives the components time to saturate to their extreme (digital) states before their inputs change again. In synchronous design, this is done on the ticking of a global clock, whereas in asynchronous digital design more local coordination is used.
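
A toy unit-delay model (our own illustration) shows why unconstrained recurrent connections break the binary abstraction: a loop of an odd number of inverters admits no consistent static assignment of logic values, so it oscillates indefinitely rather than settling to a digital state.

```python
def inverter_ring(n=3, steps=8):
    """Toy unit-delay simulation of n inverters connected in a ring.
    For odd n there is no state satisfying state[i] == 1 - state[i-1]
    around the whole loop, so the circuit can never settle: it
    oscillates forever instead of reaching a stable logic value."""
    state = [0] * n
    trace = []
    for _ in range(steps):
        # each gate outputs the inversion of its predecessor's old output
        state = [1 - state[i - 1] for i in range(n)]
        trace.append(state[-1])
    return trace

print(inverter_ring())  # [1, 0, 1, 0, ...] -- no fixed point exists
```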

In our experiment, no constraints were placed on the circuit's structure or dynamical behavior. No clock was provided, so evolution was searching the space of continuous-time dynamical systems made from the high-gain analog components normally used as logic gates. Evolution was free to explore the full repertoire of the FPGA's possible behaviors, of which digital systems constitute but a small subset. The components behave on the time scale of nanoseconds, and one of the original motivations of the experiment was to see if the dynamics of the FPGA could be organized to give an orderly behavior on a very different time scale: the periods of the two tones to be distinguished are 1ms and 0.1ms. The experiment (fully described in [12]) was successful, and the resulting circuit of Figure 1 is considerably smaller than would be achieved by conventional methods given the same resources.

Until the present study, one could only speculate as to the circuit's means of operation, so unusual are its structure and dynamics. It was clear that continuous time played an important role. If the circuit is configured onto a different, but nominally identical, FPGA chip (not used during evolution), its performance is degraded. If the temperature is raised or lowered, the circuit still works, but the frequencies between which it discriminates are raised or lowered correspondingly. (Digital circuits usually display unchanged behavior followed by brittle failure on approaching their thermal limits [11].) These initial observations warranted a concerted application of our tactics for unconventional analysis.

In Figure 1, the first analysis step has already been taken: of the 10 x 10 corner of FPGA cells, only those actively contributing to the behavior are shown. All of the cells not shown can be simultaneously clamped without affecting the output. To clamp a cell, the configuration sent to the FPGA was changed so that the cell's function produced a constant output, and this constant sourced all four of the cell's output connections. Where a connection is shown passing through a clamped cell, all of the cell's attributes except for this connection could be clamped. The cells shaded grey in the diagram were enigmatic: although not connected to the rest of the circuit, clamping them made the output less reliable, with occasional brief excursions to spurious states. One of the aims of further analysis was to learn the influence of these "grey cells."
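
Such pruning can be expressed as a simple greedy loop. The sketch below assumes hypothetical `clamp` and `circuit_still_works` hooks into the configuration interface and the behavioral test; the actual study also verified the final result by clamping all inactive cells simultaneously.

```python
def prune(cells, clamp, circuit_still_works):
    """Greedy pruning sketch: try to clamp each cell (force its function
    to a constant output); keep the clamp only if the circuit's behavior
    survives. 'clamp' and 'circuit_still_works' are hypothetical hooks
    into the FPGA configuration and the tone-discrimination test."""
    active = set(cells)
    for cell in cells:
        undo = clamp(cell)           # returns a handle to restore the cell
        if circuit_still_works():
            active.discard(cell)     # cell is functionally irrelevant
        else:
            undo()                   # restore: this cell matters
    return active                    # the pruned, functional core
```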

If the input frequency was gradually changed from 1kHz to 10kHz, the output (low at 1kHz) would begin rapidly to alternate between low and high, spending more time high as the frequency was increased, eventually staying steadily high for frequencies near 10kHz. This binary behavior of the output voltage suggested that perhaps part of the system could be understood in digital terms. By temporarily making the assumption that all of the FPGA cells were acting as Boolean logic gates in the normal way, the FPGA configuration could be drawn as the logic circuit diagram of Figure 2.

The logic circuit diagram shows several continuous-time recurrent loops (breaking the digital design rules), so the system's behavior is unlikely to be fully captured by this Boolean level of abstraction. However, it contains many "canalyzing" functions [4], such as AND and OR: functions in which one input can assume a value which makes the other inputs irrelevant. It so happens that whenever our circuit's input is 1, all of the recurrent loops in Parts A and B are broken by a cascade of canalyzing functions. Within 20ns of the input becoming 1, A and B satisfy the digital design rules, and all of their gates deterministically switch to fixed, static logic values, staying constant until the input next goes to 0.
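
The canalyzing property is easy to check mechanically. The following sketch (our own illustration, not part of the original study) finds, for any Boolean gate, those inputs whose value alone forces the gate's output:

```python
from itertools import product

def canalyzing_inputs(gate, arity):
    """Return (input index, forcing value, forced output) triples: fixing
    that one input to that value determines the gate's output no matter
    what the remaining inputs do."""
    found = []
    for i in range(arity):
        for v in (0, 1):
            outputs = {gate(*(rest[:i] + (v,) + rest[i:]))
                       for rest in product((0, 1), repeat=arity - 1)}
            if len(outputs) == 1:
                found.append((i, v, outputs.pop()))
    return found

# AND is canalyzed by a 0 on either input, OR by a 1, but XOR by neither:
print(canalyzing_inputs(lambda a, b: a & b, 2))  # [(0, 0, 0), (1, 0, 0)]
print(canalyzing_inputs(lambda a, b: a | b, 2))  # [(0, 1, 1), (1, 1, 1)]
print(canalyzing_inputs(lambda a, b: a ^ b, 2))  # []
```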

Part C of the circuit is based around a 2:1 multiplexer. When Part B is in the static state, the multiplexer selects the input marked "1" to be its output. This input comes from the multiplexer's own output via an even number of inversions, resulting in no net logic inversion but a time delay of around 9ns. Under certain conditions, it is possible for such a loop to oscillate (at least transiently), but the most stable condition is for it to settle down to a constant logic state. The output of the whole circuit is taken from this loop. As this Part C loop provides the only possibility for circuit activity during a high input, the next step in the analysis was to inspect the output very carefully while applying test inputs.

We had already observed that the output only changes state (high→low or low→high) on the falling edge of the input waveform (see Figure 1). Although never required to do so during evolution, we discovered that the output also responds correctly to the width of single high-going pulses. Figure 3 shows a low→high output transition occurring after a short pulse; further short pulses leave the output high, but a single long pulse will switch it back to the low state. The output assumes the appropriate level within 200ns after the falling edge of the input. The circuit does not respond to the width of low-going pulses, and recognizes a high-going pulse delimited by as little as 200ns of low input at each end of the pulse. The output is perfectly steady at logic 1 or 0, except for a brief oscillation during the 200ns "decision time" that either dies away or results in a state change.

This is astonishing. During the single high-going pulse, we know that Parts A and B of the circuit are "reset" to a static state within the first 20ns (the pulse widths are vastly longer: 500µs and 50µs correspond to 1kHz and 10kHz). Our observations at the output show that Part C is also in a static state during the pulse. Yet somehow, within 200ns of the end of the pulse, the circuit "knows" how long it was, despite being completely inactive during it.

This is hard to believe, so we have reinforced this finding through many separate types of observation, and all agree that the circuit is inactive during the pulse. Power consumption returns to quiescent levels during the pulse. Many of the internal signals were (one at a time) routed to an external pin and monitored. Sometimes this probing altered (or destroyed) the circuit's behavior, but we have observed at least one signal from each recurrent loop while the circuit was successfully discriminating pulse-widths, and there was never activity during the pulse. We were concerned that, because of the way the gates are implemented on the FPGA, glitches (very short-duration pulses) might be able to circulate in the circuit while our logic analysis predicts it should be static; possibly these glitches could be so short as to be unobservable when routed to an external pin. Hence, we hand-designed a high-speed "glitch-catching" circuit (basically a flip-flop) as a configuration of two FPGA cells. Glitches sufficiently strong to circulate for tens of microseconds could be expected to trigger the glitch-catcher, but it detected no activity in any of the recurrent loops during an input pulse.


We performed a digital simulation of the circuit (using PSPICE), extensively exploring variations in time delays and parasitic capacitances. The simulated circuit never reproduced the FPGA configuration's successful behavior, but did corroborate that the transient time as the circuit enters its static state at the beginning of an input pulse is just a few tens of nanoseconds, in agreement with our experimental measurements of internal FPGA signals and in accord with the logic analysis. We then built the circuit out of separate CMOS multiplexer chips, mimicking the way the gates are actually implemented by multiplexers on the FPGA, and also modeling the relative time delays. Again, this circuit did not work successfully and, despite our best efforts, never produced any internal activity during an input pulse.

We then went back to find the first circuit during the evolutionary run that responded at all to input frequency; its behavior is shown in Figure 4. During a pulse, the output is steadily low. After the pulse, the output oscillates at one of two different frequencies, depending on how long the preceding pulse was. These oscillations are stable and long-lasting. The differences between this circuit and its immediate evolutionary predecessor (which displays no discrimination, always oscillating at the lower of the two frequencies) are minor. In fact, there are no differences at all in the logic circuit diagrams; the changes merely alter details such as where a cell's function takes an unused input from. This lends further support to the view that circulating glitches are not the key: there was no change to the implementation of the recurrent loops.

We see bistable oscillations reminiscent of Figure 4 at internal nodes of Part A of the final circuit. On being released from the canalyzed stable state, the difference in the first 100ns of oscillatory behavior in Part A is used by Parts B and C to derive a steady output according to the pulse width. There is some initial state of the Part A dynamics that is determined by the input pulse length. This initial state does not arise from any circuit activity in the normal sense: the circuit over the entire array of cells was stable and static during the pulse. It is a particular property of the FPGA implementation, as it is not reproduced in simulation or when the circuit is built from separate small chips. One guess is that the change in initial state results from some slow charge/discharge of an unknown parasitic capacitance during the pulse, but we cannot yet be sure.
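
A back-of-envelope calculation (entirely our own illustration, with arbitrary assumed component values) shows why a slow parasitic RC path is at least plausible: a node charging through a large resistance reaches clearly different voltages after 50µs and 500µs pulses, a difference that downstream high-gain components could convert into different initial oscillation states.

```python
import math

def rc_voltage(t_s, r_ohm=1e7, c_f=1e-11):
    """Voltage (as a fraction of the supply) on a capacitor charged
    through a resistor for time t: v = 1 - exp(-t / RC). The R and C
    values here are arbitrary assumptions for illustration only."""
    tau = r_ohm * c_f                 # time constant: 100 microseconds
    return 1.0 - math.exp(-t_s / tau)

# A 50 microsecond (10kHz half-cycle) vs. 500 microsecond (1kHz half-cycle)
# pulse leaves the node at clearly different voltages:
print(rc_voltage(50e-6))   # ~0.39 of the supply
print(rc_voltage(500e-6))  # ~0.99 of the supply
```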

We understand well how Parts B and C use A's initial oscillatory dynamics to derive an orderly output, and have successfully modeled this in simulation. The time delays on the connections from A to B and C are crucial. This explains the influence of the grey cells, which are all immediately adjacent to (or part of) the path of these signals: varying the time delays in the simulation produces a similar result to interfering with the grey cells. Mostly, the loop of Part C serves to maintain a constant and steady output even while the rest of the circuit oscillates, but immediately after an input pulse it has subtle transient dynamics interacting with those of A and B.


Conclusion

We argued that allowing evolution to explore highly unconventional circuits can be advantageous. Analysis of the evolved circuits enhances their utility, but requires novel approaches. There are numerous tactics that can be used to piece together answers to analysis questions even for seemingly impenetrable circuits, and we applied many of these techniques to the most advanced unconventional circuit yet produced. We still do not understand fully how it works: the core of the timing mechanism is a subtle property of the VLSI medium. We have ruled out most possibilities: circuit activity (including glitch transients and beat frequencies), metastability [5], and thermal time constants arising from self-heating. Whatever this small effect is, we understand that it is amplified by alterations in the bistable and transient dynamics of oscillatory loops, and we understand in detail how this is used to derive an orderly, near-optimal output. Certain peripheral cells fine-tune particularly sensitive time delays. On the key question of long-term consistency of behavior, we know that the entire FPGA circuitry is strongly reset to a deterministic stable logic state during every high half-cycle of the input waveform. Long-term pathologies are therefore highly unlikely, demonstrating that analysis efforts can sometimes remove worries related to the use of highly unconventional circuits in practical applications.


References

1. Anderson, T., Ed. Resilient Computing Systems. Collins, London, 1985.

2. Burton, T.D. Introduction to Dynamic Systems Analysis. McGraw-Hill, 1994.

3. Jefferies, D.J. and Deane, J.H.B. Noise, traps, and snags in iterating systems. In Proceedings of the 1997 European Conference on Circuit Theory and Design (ECCTD'97), Budapest, 1997.

4. Kauffman, S.A. The Origins of Order. Oxford University Press, 1993.

5. Marino, L.R. General theory of metastable operation. IEEE Transactions on Computers C-30, 2 (1981), 107–115.

6. Pomeau, Y., and Manneville, P. Intermittent transition to turbulence in dissipative dynamical systems. Communications in Mathematical Physics 74 (1980), 189–197.

7. Rutenbar, R.A. Analog design automation: Where are we? Where are we going? In Proceedings of the IEEE Custom Integrated Circuits Conference (1993), 13.1.1–13.1.8.

8. Simon, H.A. The Sciences of the Artificial. 3rd ed. MIT Press, 1996.

9. Sipper, M., Mange, D., and Pérez-Uribe, A., Eds. Evolvable Systems: From Biology to Hardware. Proceedings of the 2nd International Conference on Evolvable Systems, LNCS 1478, Springer-Verlag, 1998.

10. Thompson, A. Evolving inherently fault-tolerant systems. Proceedings of the Institution of Mechanical Engineers, Part I, 211 (1997), 365–371.

11. Thompson, A. Temperature in natural and artificial systems. In Husbands, P. and Harvey, I., Eds., Proceedings of the 4th European Conference on Artificial Life (ECAL'97). MIT Press, 1997, 388–397.

12. Thompson, A. Hardware Evolution: Automatic design of electronic circuits in reconfigurable hardware by artificial evolution. Distinguished dissertation series. Springer-Verlag, 1998.


Authors

Adrian Thompson (adrianth@cogs.susx.ac.uk) is a research fellow with the School of Cognitive and Computing Sciences at the University of Sussex, Brighton, UK.

Paul Layzell (paulla@cogs.susx.ac.uk) is a graduate student with the School of Cognitive and Computing Sciences at the University of Sussex, Brighton, UK.


Footnotes

The following organizations have provided support for our work: British Telecom, EPSRC, Xilinx, Algotronix, and Hewlett-Packard.


Figures

Figure 1. The circuit for analysis, after pruning. For simplicity, only the connections between the cells are shown, not their functions, which are also under evolutionary control. Of the signals arriving from their four neighbors, those used as inputs to a cell's function are marked with small boxes. Each of a cell's four outputs (to the four neighbors) can be driven either directly from one of the cell's inputs, or by the output of the cell's function.

Figure 2. The logic circuit representation. The numbers in hexagons are rough estimates of time delays (in ns), based on the maximum values specified by the manufacturer. "IOB" refers to the inverting Input/Output Blocks that interface the edges of the reconfigurable array to the physical pins of the chip.

Figure 3. The response to a single high-going pulse.

Figure 4. Behavior of the first frequency-discriminating ancestor. The upper waveform is the output, and the lower the input; we see the behavior immediately after the falling edge of a single input pulse. Left: long pulse, Right: short pulse. The input to the FPGA is actually delayed ~40ns by an intervening buffer relative to the wave seen here.



©1999 ACM  0002-0782/99/0400  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 1999 ACM, Inc.

