Does anyone ever consider the ethics and morality of this business? Huge amounts of effort and money are poured into making faster and faster trades so that organisations with huge capital resources can extract money from the rest of us by making a trade a few µs faster. This isn't useful business - it's totally parasitic.
What's gone wrong with the way we organise ourselves that we put effort into nonsense like this rather than any one of hundreds of useful investments?
As it turns out, I know one of the original founders of Celoxica, from when they were a spin-out of a University hardware compilation group. Their business then was mainly tools, for direct hardware execution of algorithms. Many changes since then. I believe that he shares your view as to the current application focus. Does the world really need faster, deterministic, jitter-free ways of jumping down the economic rabbit hole whose Wonderland world is disconnected from the real one, producing no real or tangible economic benefit? It's a high value application until reality catches up and things crash.
@ssidman... I probably wouldn't feel comfortable being an HFT operator either. Still, this was a strong use case for floating-point arithmetic, and from that point of view, it is business. From a financial-industry perspective, it is happening anyway; some experts have more reservations than others, and the technology enablers of ultra-low latency are driving it. I feel that financial experts are in the best position to judge the impact on the world's financial stability. What I meant to highlight is the opportunities this growing business is opening up for FPGA systems development. The intention is to elaborate, in a follow-up article, on how and where this relates to, and generates value for, the decimal floating-point IP unit business.
@mkellett: Does any one ever consider the ethics and morality of this business...
I do and I'm sure others do also -- I just don't know what the answer is.
I have a somewhat simplistic view of the world that might be summarized as follows: at the end of the day, true wealth is generated by someone picking up a shovel and digging something (e.g., a mineral) out of the ground, or growing something, or taking something and fabricating something else out of it.
When institutions like banks "make money" by buying or selling foreign currencies, or when traders "make money" by buying or selling shares, that money has to come from somewhere (and judging from the state of my bank account, I have a pretty good idea where it's coming from).
This isn't to say that the concept of a stock exchange is bad per se -- the ability to raise money to fund projects is very useful -- and I have no problem with the idea that folks who take the risk with their money should get some reward -- but things like high frequency trading do leave a "bad taste" in my mouth...
We used to ask ourselves the same question, also from an Islamic-economy standpoint, where this may be seen as gambling. However, HFT is only a frequency-dimension extension of a long-existing capital-market operation: arbitrage. If trading is ethical done manually, it is probably more ethical done quantitatively and algorithmically -- being more qualified and reasoned. And if it is ethical at low and medium frequencies, then it probably is at higher frequencies. It's the regulator who should ensure it contributes to market stability and to the economy in general. The debate is on in the US, UK, Europe, and S.E. Asia over "how high frequency" traders and venues should be allowed to go. Most of the market is now in the US and UK, while the EU is, as usual, more conservative with regulation, so you'll find European traders setting up platforms on NYSE, NASDAQ, Chicago, and London's FTSE.
Here is a link to a WSJ article on subject matter: http://online.wsj.com/news/articles/SB10001424053111903392904576512250007216020
"Independent research shows that high-frequency trading has a positive impact on global capital markets by increasing liquidity and reducing price volatility, findings that could allay some of the popular and political concerns over the strategy."
Dear rich.pell: The article isn't about evaluating the usefulness of HFT. It is meant to highlight the rising merit of reconfigurable hardware in an environment where such merit used to exist only in exchange ticker plants, involving just a couple of suppliers. Now it is found in almost every component of every single participating entity, with over a dozen serious US and European FPGA hardware-development SMEs involved, not to mention the dominant network-systems vendors.
I'm interested as well. I don't know what the point of decimal cores would be. I know it was useful in the old (ENIAC) days, when binary->decimal conversion was hard. And the HP calculators used BCD processors, which avoided some rounding errors.
@Kevin... Decimal floating-point arithmetic is essential when we want "natural" decimal-fraction accuracy, most typically in financial/monetary applications. For example, when you want perfect decimal-fraction accuracy to the 3rd or 6th digit beyond the point, or even further, binary floating point won't do (1/2 + 1/4 + 1/8 + ... covers only a small percentage of all possible decimal fractions). DFPA is done both in software and in hardware. Several current processors have adopted the simpler operations within their cores and kept the more complex ones in software to save space.
When performance (latency) is an absolute requirement, full hardware realization is the solution. Even though this article's subject matter is highly statistical, and hence can be thought of as inherently inaccurate, leading market-index publishers, for example, sometimes require 10 decimal digits of fraction accuracy. As well, many traders prefer to do the arithmetic in proper DFP to avoid accidental cumulative errors.
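The binary-fraction limitation described above is easy to demonstrate in any language. Here is a minimal Python sketch using the standard-library decimal module (a software DFP implementation, not the hardware cores discussed here) to contrast the two representations:

```python
from decimal import Decimal, getcontext

# Binary floating point stores 0.1 and 0.2 as the nearest binary
# fractions, so the sum picks up a representation error.
binary_sum = 0.1 + 0.2
print(binary_sum)          # 0.30000000000000004, not 0.3

# Decimal floating point represents decimal fractions exactly.
getcontext().prec = 16     # working precision in decimal digits
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum)         # 0.3, exactly
```

The same contrast is what the IEEE 754-2008 decimal formats address in hardware: the decimal result matches pencil-and-paper arithmetic digit for digit.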
Kevin Neilson- It sounds like it might be difficult to do (efficient) decimal arithmetic in an FPGA; the carry chains and multipliers are all binary.
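For what it's worth, one classic way to get decimal digits out of binary adders is BCD arithmetic: add two 4-bit BCD digits with an ordinary binary adder, then add 6 to any digit that overflowed past 9 so the carry propagates decimally. A rough Python sketch of that per-digit correction (purely illustrative; this is the textbook BCD technique, not how any particular FPGA core works):

```python
def bcd_add(a, b):
    """Add two packed-BCD integers using binary addition plus
    the classic +6 correction on each 4-bit digit."""
    result = 0
    carry = 0
    shift = 0
    while a or b or carry:
        digit = (a & 0xF) + (b & 0xF) + carry  # binary add of one BCD digit
        if digit > 9:                          # decimal overflow in this nibble
            digit += 6                         # +6 pushes the carry out of the nibble
            carry = 1
            digit &= 0xF
        else:
            carry = 0
        result |= digit << shift
        a >>= 4
        b >>= 4
        shift += 4
    return result

# 0x19 is BCD for 19, 0x23 is BCD for 23; 19 + 23 = 42 -> BCD 0x42
print(hex(bcd_add(0x19, 0x23)))  # 0x42
```

In hardware the +6 correction is just a small amount of extra logic per digit, which is part of why BCD-style decimal units are feasible even though the underlying carry chains are binary.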
Back in the 1830s Charles Babbage attempted to do this with geared systems. Most of us have heard of the Difference and Analytical Engines. These monsters were to have 30 columns of numbers moved by interposers. Much of Babbage's lifetime was spent on ways to determine what he called the anticipating carry. Some of his mechanical solutions look crazy -- half-stepped and elliptical gearing on the diagrams. Today we call this pipelining and, I think, half carry.
The solution I like the best was from the Analytical Engine, where he used bolts, like those used to lock doors. Each decimal digit had a bolt which would line up in a row when the interposer kicked in. This way the whole stack of 30 digits could be lifted quickly and with the force of gravity.
The "improved" method implemented in the machines built for his bicentenary in the late 1990s uses "clock" improvements and sectional gearing to do this 'pipelining'.
The elliptical gearing shown on his later diagrams is actually a way of splitting the clock so that there are 5 clocks rather than 10 to find the carry chain. The ellipses are in reality two gears, one running forward in time, the other running backwards in time from -5 clocks to 5 clocks. The results are tabulated at zero.
As a firmware programmer, I do not know much about silicon (my classes in this were 30 years ago). It does seem that the fundamental theory of decimal engines was pretty well worked out by 1850. The rest is just implementation.
For a fascinating social account of how these high-speed trading machines were expected to be used in the 1850s, read Charles Dickens's Little Dorrit. Even 160 years ago the big players had the advantage, spending large sums on research to get that edge over the other players in the game. Back then the futures were in insurance and actuarial tables -- betting on the lives of those who had money and were willing to insure the inheritance.
The talk about decimal arithmetic is strictly about floating-point fractions; there is no accuracy issue with integers. All arithmetic operations are done, verified, and commercially available as IP core units (http://silminds.com/ip-products/dfp-units).
Are you replying to my note regarding decimal arithmetic? I am aware of the difference between floating point and integer. This is important in modern binary computing. I have a textbook on rounding errors in floating point. Decimal computing is done with both fixed and floating point. There are just 10 states rather than two.
In the 19th century, the Difference and Analytical Engines were to deal with floating-point numbers -- nonlinear functions that could be linearized through statistical probabilities. That is why Babbage wanted 30 digits of decimal accuracy. His floating point was mechanical.
Few people have actually looked at the abstractions of his work. It is written in a notation that he invented and that no one can read: statistically mapping out the carry propagations through multiplications and divisions. These could be broken down statistically into summations called differences.
The ultimate goal of any of this is to find the right statistical model to predict the near future.
I have a text from the 1970s on this sort of modeling, back when binary was new. One of the problems was to see how large a matrix your BASIC interpreter could invert. Need a large complex number? Load it into a matrix digit by digit.
I personally see no reason why this can not be done directly in silicon.
Hello Sheepdoll: No, I was replying to Kevin Neilson re his "It sounds like it might be difficult to do (efficient) decimal arithmetic in an FPGA; the carry chains and multipliers are all binary." Now I recall that the 6502 processor had a BCD mode for decimal (integer) arithmetic.
I was also more interested in the comments. In re-reading the article I noticed that it was about "patented" floating-point algorithms for HFT. I must have missed the first paragraph, thinking it was about silicon fab.
I find the idea of patenting something fundamental like floating-point hardware disturbing, especially given the open history of development 200 years ago in the higher maths and statistics.
Perhaps it is my age, but I have a problem with protecting non-physical implementations. Ideas are cheap. Physical implementations such as a fab or a data center are where the costs are: power, labor, taxes, etc. Those who can make a better product should be rewarded, not those who game the system (which I consider tantamount to cheating).
Given the speed of progress, it seems that patent protection of ideas should be shorter. If one cannot reasonably develop an idea in a few months, let someone else implement and improve it, rather than locking it away in a vault to borrow funding against.
The mentioned DFPA cores were developed independently of any application. It is financial applications that demand perfect decimal-fraction accuracy. This may explain why mainly IBM (POWER) and Oracle (SPARC) processors have adopted DFPA cores.
SilMinds Inc. is the patent holder, not myself, and I included the link to the data sheets if you are interested in technical/algorithm detail. The cores involve tens of person-years of research work (initiated by Dr. Hossam Fahmy, SilMinds CTO & Prof. of Computer Engineering). The first mature DFPA standard is IEEE 754-2008, which is implementation independent. SilMinds' commercial hardware implementations were novel at their respective times of development, and hence they ought to be protected, of course. (There was no other commercial offering until a year ago, when I became no longer involved with business development of this IP product, and I don't know of any yet.)
Now, HFT came into the picture because it is very latency sensitive, and hence those wanting DFPA accuracy (rather than binary float or software workarounds) are likely to need it in hardware. There is a product model that isn't published yet. On the other hand, banking and accounting, for example, are the most accuracy-demanding applications, but unfortunately are not that latency sensitive.
I disagree with you that IP work is cheap. Firstly, these cores are simulated, verified, and tested both in software and in silicon (FPGA and on-chip). But how do you sell them as hardware? They can only be part of a processor or part of an FPGA realization. More generally, we are living in a highly layered and specialized industry at this time. A digital designer may use the (knowledge) services of an electronics engineer to support his work, which the latter may not be familiar with. Having come from a network-services background, the routing expert is not necessarily a DSL or UMTS expert (layer 3 vs. layer 1).
Finally, sure, ideas not carried all the way to maturity are for sharing, publishing, and working on more than for patenting, though some may patent them anyway. Yet sometimes it is sound innovative ideas that generate funds, while patents may not. What matters to business is the potential impact; what matters to knowledge is the generated progress, or synergy with the state of knowledge.
"Finally, sure, ideas not carried all the way to maturity are for sharing, publishing, and working on more than for patenting, though some may patent them anyway."
My frustration with IP is not about research development time. It has more to do with accessibility and ownership of ideas. Applied research is implementation. There is a physical goal or service involved.
I love history. If I knew how to do it, I would have loved to earn a degree in the history of technology and mathematics. This is why I collect old textbooks, some going back to the 18th century. It is also why I tend to write walls of text, as I think in a more Victorian style of prose.
What concerns me is the departure from Yankee engineering toward the locked-out guild system of Europe it was meant to counteract. We lament the lack of young people (and women) in these fields. I love watchmaking as well as pipe-organ building. I have a watchmaker's bench on which I do my engineering, where I can hand-solder down to 0805 parts.
Had I lived before 1880 in Europe, it would have been illegal for me (as a woman) to own these tools. The guilds were restrictive. Masters had the right to destroy inferior work. This was often used to keep the apprentice in check, should the apprentice make the master look incompetent.
Such a system worked well for hundreds of years, till the printing press happened and revolution came about in the 18th century. Now we have the concept that all are created equal and should have fair access.
We are in a more dynamic, interconnected world. We have gone from a world where something was once rare -- where someone might see a toy or an idea once in a lifetime -- to a world where these ideas are common.
I think the concern with HFT is that it gives an unfair advantage. If one were, say, to use the same equipment to play in a casino, it would be frowned upon.
At the moment I am interested in statistical analysis of recorded music events. The idea of fast, accurate decimal floating point interests me. I got interested in nonlinear mathematics after reading a biography of Johnny von Neumann, where one learns that 3 is not 30 times 10. Such things are abstract and hard to explain without walls of text and graphs.
Should silicon go the way of 3D printing and assembly, it would be nice to have access to something that might not work in one field but could be adapted to another.
This is the heart of the maker/designer movement. To open up ideas to those who can best implement them.
As a kid I got to take field trips to the landfill/dump, where we would use refuse like old typewriters to make art projects. Why can we not do the same with old silicon cores and obsolete software that can only run under MS-DOS or OS-9?
Thank you for sharing your background and interests. I love history too -- geographical and ethnological history -- and you actually brought up the exciting area of the history of computing. I find this dialog enriching, even though you have strong opinions. I appreciate learning of different likes and perspectives because it boosts one's own self-perspective. I am, unlike yourself (with full respect to yourself), not a hands-on person. I am a systems person who's into logic and modeling (broadly). I strongly believe in the knowledge economy and a services-oriented society, and I deal best with intangibles.
One thing I still wish to comment about. You wrote "I think the concern with HFT is that it gives an unfair advantage. That if one was say to use the same equipment to play in a casino, it would be frowned upon."
I researched the HFT topic within my former role as SilMinds Head of Technical Business Development, simply because it demands complex computational speed and has possible merit for accuracy. Trading has been there for ages. Probably you and I, and Max, don't see much fun in it. What I want to say is that the gambling aspect is associated with securities trading in general, and not specifically with HFT. My study revealed that over 80% of current trading is algorithmic, conducted by performant servers from outside the floor. There is low-, medium-, and high-frequency trading. If we think of the trader's mind, this is "highly" qualified gambling. I don't like gambling anyway. If we think of the capital-market perspective, trading is necessary to set valuations and stabilize, and without it the world economy would crash. So somebody (who likes it) has to do it.
However, it is driven by enabling technologies, and once there it motivated higher technology for lower latencies. Isn't this how it works with technology cycles? Performance merit stimulates bigger and faster hardware, which in turn opens more possibilities requiring hardware advancement. I saw this as a clear case. And after all, all major exchanges now (something you can publicly view) deal buy and sell orders in millisecond time units, and you'll see multiple orders within the same ms 'tick'.
Back to fairness, yes... The trader who has more clever algorithms running over faster software and hardware platforms has better chances. It is fair because he or she has invested more in "financial engineering" education and also in platform and algorithm development.
@Kevin: I don't know what the point of decimal cores would be.
Some countries mandate that any financial calculations be performed in such a way that you get exactly the same result as if you'd performed them with pencil and paper. If you work in base-2 (binary) you may introduce errors -- the bottom line is that you have to work with a base-10 decimal representation -- but most CPUs don't perform calculations on decimal representations very efficiently -- you can do much better using custom cores created in FPGAs...
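To make the "pencil and paper" point concrete, here is a small Python illustration (with hypothetical figures) of how binary rounding drifts away from the exact answer when many currency amounts are accumulated, while a decimal representation matches it:

```python
from decimal import Decimal

# Sum a price of ten cents one thousand times.
# Pencil-and-paper answer: exactly 100.00.
binary_total = sum(0.1 for _ in range(1000))
decimal_total = sum(Decimal("0.10") for _ in range(1000))

print(binary_total)   # slightly less than 100.0: accumulated binary rounding
print(decimal_total)  # 100.00, exactly
```

Each individual binary error is tiny, but it never cancels out exactly, which is precisely the kind of cumulative drift that decimal cores (in software or FPGA hardware) are meant to eliminate.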