ORLANDO, Fla. - "Cross-fertilization" was the watchword here recently as the fourth annual International Conference on Systemics, Cybernetics and Informatics drew university and corporate researchers more closely together than is usual at such events.
The conference, which ran concurrently with the sixth International Conference on Information Systems, Analysis and Synthesis, melded academic research with industrial engineering applications in more than 1,250 papers.
"Most other conferences focus either on scientific discoveries or on industrial applications," said Nagib Callaos, general chair of the conference. "But our goal was to gather academics and engineers together to stimulate fruitful cooperation among the many different disciplines which are all fundamental to cybernetics."
The new approach, according to Callaos, enables cybernetic theorists to enlist engineers early in the design stage, an essential requirement, he claimed, for instilling machine intelligence into future smart information systems.
According to Callaos, virtual engineering paradigms, combined with biologically inspired computing technologies, have a good chance of realizing truly intelligent machines by virtue of smart signal processing and "humanized" information systems. The conference also sponsored tracks on control systems, medicine, biology, machine "psychology" and even art.
"Our first conference was 100 percent academic, but this year we have 25 percent of papers contributed by industrial engineers working at big companies like Sony," said Callaos. "Our goal is to have a 50/50 split between academic and industrial papers in the future."
To support the 50/50 goal, the two conferences have adopted an application-specific format, presenting both applications and their theoretical underpinnings in the same track. Callaos called this a project focus that combines scientific discoveries with the uses to which they will be put.
Next year, when NASA and IBM sponsor a track on decision support, the project focus will narrow in hopes of nudging the industrial component closer to the 50 percent goal.
"The NASA track will provide a focus on a group-decision project that NASA and IBM are pursuing," Callaos said. "They called us saying that no other conference would accommodate their group-decision project. Basically, NASA's current decision-support systems are just a way of getting good advice to the project leader, who then makes the decision. But in the future NASA wants a technology that can take social variables into account so that an equitable collective decision can be made by a group."
This year's tracks focused on broader, better-known projects. For instance, control system tracks featured papers on fuzzy logic, cellular automata and nonlinear approaches, such as neural networks. Then, application-specific tracks for control systems homed in on robotic navigation, real-time manipulator management and "facial expression" models by which mobile robots can project "feelings." Control applications in transportation systems included railway routing problems, engine diagnosis, neural-based air-traffic control and mobile real-time communications for automobile traffic controllers.
Virtual-engineering tracks covered remote cooperative methodologies for collaborative work on design problems including information database sharing and visualization of information. Another technology, "telereality," combines telepresence with touch feedback so that remote workers can touch and feel virtual prototypes.
Reconfigurable logic also served as a virtual-prototype testbed, especially for optimizing circuitry with difficult-to-tune parameters. Software agents were enlisted as autonomous "helpers" for trying out hard-to-determine parameters, as well as for discovering actual design solutions through interaction among agents with competing goals.
Meanwhile, at the companion International Conference on Information Systems, Analysis and Synthesis, next-generation control architectures kept the heart of autonomous systems beating by virtue of biological analogies with neural networks, fuzzy logic and genetic algorithms. Robotics researchers followed suit by equipping their robots with the ability to generate facial expressions and body movements that project the proper emotions to any humans with whom they interact. And reconfigurable computer fans proposed a "brainway computer" that mimics the architecture of the real brain.
Neural-network theory is making fast progress in control-system design. Specifically, classic self-tuning control and model-reference adaptive control (MRAC) can now be based on autotuning neural networks, as described in a paper by professors Wei-Der Chang and Jer-Guang Hsieh of National Sun Yat-Sen University (Kaohsiung, Taiwan), along with Rey-Chue Hwang of I-Shou University, also in Kaohsiung.
A self-tuning control system (STC) performs control while it learns to model an unknown manufacturing plant. Hence, its adaptive control is indirect. The STC system begins with estimated parameters of the controlled plant to provide a valid starting point for the controller's parameter adaptation.
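In spirit, that indirect scheme can be sketched with a classic recursive-least-squares estimator, a standard building block of self-tuning control. The first-order plant and its parameter values below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative self-tuning setup: estimate the parameters of an
# "unknown" first-order plant y[k] = a*y[k-1] + b*u[k-1] online with
# recursive least squares (RLS); a controller would then be computed
# from the current estimates. The values a=0.8, b=0.5 are invented.
a_true, b_true = 0.8, 0.5

theta = np.zeros(2)      # estimated [a, b]: the controller's starting point
P = np.eye(2) * 100.0    # estimate covariance (large = very uncertain)

y_prev, u_prev = 0.0, 0.0
rng = np.random.default_rng(0)
for k in range(200):
    u = rng.uniform(-1, 1)                  # exciting input signal
    y = a_true * y_prev + b_true * u_prev   # unknown plant responds
    phi = np.array([y_prev, u_prev])        # regressor vector
    # RLS update: correct the estimate in proportion to prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
    y_prev, u_prev = y, u

print(theta)  # converges toward [0.8, 0.5]
```

Because the plant here is noise-free, the estimates converge essentially exactly once the input has excited both parameters.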
An MRAC also needs a valid starting point, but it separates learning a model from controlling the plant by using a reference model. Subtracting the real plant's output from the reference model's desired output yields an error signal, which a learning feedback controller minimizes.
An adaptation strategy for tuning the learning parameters permits the MRAC to asymptotically track the reference model output, driving the tracking error to zero. The MRAC's adaptation mechanism also continues to make momentary adjustments to its parameters to keep it tracking the reference model no matter what changes perturb the system.
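The MRAC loop can be illustrated with the textbook "MIT rule," in which a single feedforward gain is adapted until the plant tracks the reference model. The gains and reference signal here are illustrative; the paper's neural controller is far more elaborate:

```python
# Toy MRAC with the classic MIT rule: the plant is a first-order lag
# with unknown gain kp; the reference model is the same lag with the
# desired gain km. One adjustable feedforward gain theta is adapted
# so the plant output tracks the model output. All the numbers
# (kp, km, gamma, the square-wave reference) are invented.
dt = 0.01
a = 1.0            # pole of both plant and model: x' = -a*x + gain*input
kp, km = 2.0, 1.0  # unknown plant gain vs. desired model gain
gamma = 0.5        # adaptation rate

y = ym = 0.0
theta = 0.0        # adjustable gain; the ideal value is km/kp = 0.5
for k in range(100000):
    r = 1.0 if (k * dt) % 20 < 10 else -1.0   # square-wave reference
    u = theta * r                      # adjustable controller
    y += dt * (-a * y + kp * u)        # plant
    ym += dt * (-a * ym + km * r)      # reference model
    e = y - ym                         # tracking error
    theta += dt * (-gamma * e * ym)    # MIT rule: adjust theta to shrink e

print(round(theta, 3))  # approaches km/kp = 0.5
```

The adaptation law keeps nudging theta whenever tracking error appears, which is why such a controller can ride out slow changes in the plant.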
To update the STC and MRAC with meta-learning, the Taiwan team added two new neural parameters to the usual mix to enable the enhanced neurons to autotune their transfer function.
For STC, the new hyperbolic autotuning neural network models a nonlinear plant and control-law computation. For MRAC, it models the plant output and tracks the reference model output despite aging, wear, failures and other long-term perturbations.
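The paper's exact parameterization isn't spelled out here, but a common way to make a hyperbolic-tangent neuron "autotune" its transfer function is to give the activation two trainable shape parameters, an amplitude and a slope, and fit them by gradient descent. A minimal sketch under that assumption:

```python
import numpy as np

# One "autotuning" neuron: besides its weights, the activation itself
# carries two trainable shape parameters, amplitude A and slope B, so
# the unit computes f(x) = A * tanh(B * x). (The paper's exact
# parameterization may differ; this scaled tanh is a common choice.)
# Gradient descent tunes A and B to fit data generated with A=2, B=0.7.
x = np.linspace(-3, 3, 61)
target = 2.0 * np.tanh(0.7 * x)   # "unknown" response to recover

A, B = 1.0, 1.0                   # initial transfer-function shape
lr = 0.05
for _ in range(5000):
    t = np.tanh(B * x)
    err = A * t - target
    # Gradients of the mean squared error w.r.t. the two shape parameters
    dA = np.mean(err * t)
    dB = np.mean(err * A * (1 - t**2) * x)
    A -= lr * dA
    B -= lr * dB

print(round(A, 3), round(B, 3))   # recovers roughly (2.0, 0.7)
```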
In another paper, researchers at the ATR Media Integration & Communications Research Laboratories (Kyoto, Japan) established a nonverbal framework for communicating emotions with the positions of virtual cyberspace bodies. The work is based on analyzing the movements of dancers, using the Labanotation systems invented by choreographer Rudolf Laban, combined with a time-energy-space and weight model. By cataloging the emotions expressed by various body motions, the ATR team hopes to enact the feelings expressed by virtual avatars on-line.
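A toy version of such a catalog might score each motion on Laban-style effort factors and look up the nearest cataloged emotion. The prototype values below are invented for illustration, not ATR's actual data:

```python
# Cataloging emotions by movement quality, in the spirit of Laban's
# effort factors. Each motion is scored 0..1 on time (sudden vs.
# sustained), weight (strong vs. light) and space (direct vs.
# indirect), and the nearest cataloged emotion is returned.
CATALOG = {
    # (time, weight, space) prototypes -- invented values
    "anger":   (0.9, 0.9, 0.8),   # sudden, strong, direct
    "joy":     (0.8, 0.4, 0.3),   # sudden, light, indirect
    "sadness": (0.2, 0.2, 0.4),   # sustained, light, wandering
    "calm":    (0.3, 0.3, 0.8),   # sustained, light, direct
}

def classify(time, weight, space):
    """Return the cataloged emotion nearest to the observed effort."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip((time, weight, space), proto))
    return min(CATALOG, key=lambda emo: dist(CATALOG[emo]))

print(classify(0.95, 0.85, 0.9))  # a sharp, forceful, direct motion -> "anger"
```

An avatar system would run the lookup in reverse: pick the emotion to project, then drive the body with that prototype's effort qualities.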
Other papers at the conference concentrated on mimicking the facial muscles of real people on avatars' faces, expressing emotions such as surprise or anger with surprised- or angry-looking avatars.
One hot spot addressed by several papers was how to coordinate autonomous agents in the "dividing and conquering" of problem solving. In particular, University of Calgary researchers Dan Stefanoiu, Mihaela Ulieru and Douglas Norrie presented a fuzzy-modeling approach to what they called "holonic manufacturing": managing a modular system of interchangeable parts that self-organize into an optimal production environment.
Fuzzy sets provide a mathematical metric for the manufacturing concepts modeled by the team's Multi-Agent System (MAS). By minimizing the vagueness of fuzzy uncertainty, the MAS appropriates the necessary holonic structures to complete each task presented to it. The authors propose the MAS approach as a standardized distributed work flow environment that utilizes self-organization to divide and conquer ad hoc problems autonomously.
The simulation run by the group to prove its concept used several cooperating agents: a managing agent, two executive agents and four "worker" agents, linked by 49 fuzzy relations, one for each ordered pair of the seven agents. After minimization, the final holonic behavior first states the goal the manager agent has set, then informs the executive agents. Each of them in turn assigns a task to a worker agent. Subsequently, one executive realizes the need for more resources and activates the two remaining worker agents. Finally, the two executives pool their retrieved resources, and the manager glues them together in the final phase to reach the ultimate goal.
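A crude stand-in for that scenario, with the paper's vagueness minimization replaced by a simple strongest-relation rule, might look like this; the 49 cooperation values are invented:

```python
import itertools

# Toy re-creation of the seven-agent scenario: one manager, two
# executives and four workers, with a fuzzy "cooperation strength"
# in [0, 1] between every ordered pair of agents (7 x 7 = 49
# relations; the numbers are invented). The holarchy forms by each
# worker joining the executive with whom cooperation is strongest --
# a crude stand-in for the paper's vagueness minimization.
agents = ["manager", "exec1", "exec2", "w1", "w2", "w3", "w4"]
coop = {(a, b): 0.5 for a, b in itertools.product(agents, agents)}
coop.update({
    ("exec1", "w1"): 0.9, ("exec1", "w2"): 0.8,
    ("exec1", "w3"): 0.3, ("exec1", "w4"): 0.2,
    ("exec2", "w1"): 0.1, ("exec2", "w2"): 0.2,
    ("exec2", "w3"): 0.7, ("exec2", "w4"): 0.85,
})

teams = {"exec1": [], "exec2": []}
for w in ["w1", "w2", "w3", "w4"]:
    best = max(teams, key=lambda e: coop[(e, w)])  # strongest relation wins
    teams[best].append(w)

print(teams)  # {'exec1': ['w1', 'w2'], 'exec2': ['w3', 'w4']}
```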
The boldest paper presented concerned the "brainway computer" proposed by Jose Hiroki Saito, Alessandro Noriaki Ide and Sandra Abib, researchers at the Universidade Federal de Sao Carlos in Brazil. The team described an artificial brain that utilized reconfigurable hardware to simulate the five major brain functions: reflexes, movements, innate behaviors (such as eating), coordinated sensory-motor functions (such as dribbling a basketball) and mental functions.
The authors likened their approach to a plane vs. a bird. The plane is simpler and doesn't try to mimic the bird's coordinated movements. The two are linked only in that they both fly. Likewise, the reconfigurable brain doesn't have the details of a real brain, but it could solve problems like a human being does.
Just as a person thinks different thoughts with the same neurons, the FPGAs in the brainway architecture are reconfigured to execute different algorithms. The FPGAs can process specific algorithms for various brain functions at different times. When finished with a task, most FPGAs become available for another algorithm (unless they are part of reflex actions, which must remain on-line and available for use at any time).
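That time-sharing idea, with reflex circuitry pinned on-line, can be sketched as a simple region allocator. The region counts and task names below are invented for illustration:

```python
# Sketch of time-sharing FPGA resources among algorithms: regions
# running reflexes stay permanently reserved, while the rest are
# reclaimed and reconfigured as tasks finish.
class FPGAPool:
    def __init__(self, regions, reflex_regions):
        self.free = regions - reflex_regions   # reflex regions never released
        self.running = {}                      # task -> regions in use

    def load(self, task, regions_needed):
        """Reconfigure free regions for a new algorithm, if possible."""
        if regions_needed > self.free:
            return False
        self.free -= regions_needed
        self.running[task] = regions_needed
        return True

    def finish(self, task):
        """Task done: its regions become available to other algorithms."""
        self.free += self.running.pop(task)

pool = FPGAPool(regions=16, reflex_regions=4)   # 4 regions stay reflex-only
assert pool.load("motor-coordination", 8)
assert not pool.load("vision", 10)              # only 4 free: refused
pool.finish("motor-coordination")               # regions reclaimed...
assert pool.load("vision", 10)                  # ...and reconfigured
print(pool.free)                                # 2 regions still free
```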
The authors said their next step is to implement learning so that the brainway computer can "acquire its own algorithms dynamically."