PORTLAND, Ore. -- Researchers in Switzerland are claiming to have developed a new source-location algorithm they say could replace the brute force method of identifying national security and other threats used by the National Security Agency (NSA) and others.
In a paper published Friday (Aug. 10) in the journal Physical Review Letters, researchers at École Polytechnique Fédérale de Lausanne said they have demonstrated that relatively small numbers of network nodes can be used to predict a source location. Their selective method differs from the computationally intensive approach used by the NSA and others, which scours all network nodes for potential threats.
The researchers said the key to selecting the right subset of network nodes for accurate source location is identifying the network's structure, the density of its nodes and the number of "information cascades," which occur when users on a network observe the actions of others and tend to act accordingly.
The algorithm, called "Sparse Inference," also analyzes the web-like structure of real-world interactions using just a few samples out of the total number of nodes in a complex network. As a result, the researchers claimed that extremely complex connection scenarios can be quickly analyzed to track down their source location.
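The published paper does not spell out its math in this article, but the underlying idea can be illustrated with a toy sketch: if a threat spreads through a network at roughly constant speed, a handful of observer nodes that record when they were first reached can be compared against graph distances from each candidate source. The candidate whose distances best explain the observed arrival times is the likely origin. The code below is an illustrative simplification, not the researchers' actual Sparse Inference algorithm; the function names, the toy graph and the unit-speed spreading assumption are all mine.

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` to every reachable node (unweighted BFS)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observations):
    """Rank candidate sources by how well graph distances to a few
    observer nodes explain the observed arrival times.

    `observations` maps observer node -> arrival time.  The unknown
    start time cancels out because both vectors are mean-centered."""
    observers = list(observations)
    times = [observations[o] for o in observers]
    t_mean = sum(times) / len(times)
    best, best_cost = None, float("inf")
    for cand in adj:
        d = bfs_distances(adj, cand)
        if not all(o in d for o in observers):
            continue  # candidate cannot even reach every observer
        dists = [d[o] for o in observers]
        d_mean = sum(dists) / len(dists)
        # squared error between centered arrival times and centered distances
        cost = sum(((t - t_mean) - (dd - d_mean)) ** 2
                   for t, dd in zip(times, dists))
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy network: the spread actually starts at "b"; only three of the
# six nodes act as observers and report first-arrival times.
adj = {
    "a": ["b"], "b": ["a", "c", "d"], "c": ["b", "e"],
    "d": ["b", "e"], "e": ["c", "d", "f"], "f": ["e"],
}
print(locate_source(adj, {"a": 1, "e": 2, "f": 3}))  # -> b
```

Note that only three of the six nodes are ever consulted, which is the spirit of the approach: a well-chosen sparse set of observers can pin down the source without monitoring the whole network.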
The researchers acknowledged that they benefited from hindsight in selecting their criteria. Their next goal is to evaluate the robustness of their framework by taking into account inaccuracies while attempting to codify reliable methods of selecting key network nodes.
"Nevertheless," the researchers claimed, "our results indicate that source localization in large networks -- a seemingly impossible task -- is indeed feasible, both in terms of localization accuracy and computational cost."
The Sparse Inference algorithm sounds too good to be true, since it appears to be a remedy for what the researchers term a "seemingly impossible task." But the heart of the story is that, with a careful selection process, not all of a network's nodes have to be evaluated in order to make accurate inferences. The real work, however, lies ahead for these researchers, as they try to use Sparse Inference to make predictions about unknown source locations that actually pan out. For that, they will need to refine a methodology for picking those key nodes and prove that accurate predictions were actually made from their selection. To read their paper (for free) try:
with some supplemental material here:
An interesting concept: I suppose it is the converse of the old expression "you can't get there from here". Knowing which nodes first encounter a threat means that certain paths in a network are likely and other ones are ruled out. Certain source locations become prime candidates for further investigation.