We are doing a lot of work at Reuters these days to understand and then adapt our services to the increasingly different requirements of human and machine users.
Reuters has a 155-year history of serving the information needs of human beings around the world, but Reuters was also early to recognize that in the financial services industry many of the consumers of our data were in fact other machines.
The two co-existed comfortably for many years, with feeds of Reuters data being routed to terminals for human display and drawn upon by a variety of customer applications, such as risk management or end-of-day portfolio pricing systems.
However, since the beginning of this century, the rise of algorithmic trading, coupled with the explosion in derivatives and regulatory changes, has created a situation in which the needs of humans and their machines are diverging. This in turn has profound user-interface as well as systems-architecture ramifications for Reuters and our customers.
To provide market data and other information for machine consumption, the key attributes are: a comprehensive yet extensible data model to represent the information being delivered; rich metadata that describes the underlying content so that machines can use it more fully; a published API to make it easy for customers to write applications against the provider's data; a published and extensible symbology set (like Reuters RICs) to allow applications to identify the data and associate it with companies, markets, and instruments; and finally, and of increasing importance, raw speed in the form of super-low-latency datafeeds.
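Purely by way of illustration (this hypothetical sketch is not our actual API), here is what a machine-consumable message combining those attributes might look like: a typed data model, a RIC-style symbol for identification, descriptive metadata, and a timestamp precise enough for latency-sensitive applications.

```python
# Illustrative only: a hypothetical machine-readable market data message,
# not Reuters' actual data model or API.
from dataclasses import dataclass, field


@dataclass
class Tick:
    ric: str                 # RIC-style symbol, e.g. "VOD.L" for Vodafone on the LSE
    bid: float
    ask: float
    timestamp_ns: int        # nanosecond precision matters on a low-latency feed
    metadata: dict = field(default_factory=dict)  # e.g. asset class, venue, currency


def on_tick(tick: Tick) -> None:
    """Hypothetical consumer callback an application might register with the feed."""
    if tick.metadata.get("asset_class") == "equity":
        mid = (tick.bid + tick.ask) / 2.0
        print(f"{tick.ric}: mid={mid:.4f}")


# A single message as a customer application might receive it.
on_tick(Tick(ric="VOD.L", bid=152.10, ask=152.15,
             timestamp_ns=1_700_000_000_000_000_000,
             metadata={"asset_class": "equity", "currency": "GBp"}))
```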
Back on the human side, things have not sat still either. Instead of just a flashing screen of impossibly fast-moving news and data, customers need increasingly sophisticated analytics to make sense of the torrent of available information; they need intuitive design, intelligent search, and far better graphics to find and display that information; and they often need a reliable mechanism to slow down the data and pass along only the changes that affect valuation, rather than seeing how many times an instrument can update in one second.
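To illustrate that last point, here is a minimal sketch of such a conflation mechanism, assuming a simple fixed threshold; the names and the 0.1% figure are arbitrary choices for the example, not a recommendation.

```python
# A minimal sketch of update conflation: forward a new price only when it
# has moved enough (relative to the last forwarded price) to matter for
# valuation, suppressing sub-threshold flicker.
class ConflatingThrottle:
    def __init__(self, threshold: float = 0.001):  # 0.1% move, chosen arbitrarily
        self.threshold = threshold
        self.last_forwarded: dict[str, float] = {}

    def should_forward(self, ric: str, price: float) -> bool:
        last = self.last_forwarded.get(ric)
        if last is None or abs(price - last) / last >= self.threshold:
            self.last_forwarded[ric] = price
            return True
        return False  # change too small to affect valuation; drop it


throttle = ConflatingThrottle()
for p in [100.00, 100.01, 100.02, 100.15, 100.16]:
    if throttle.should_forward("VOD.L", p):
        print(f"forward {p}")   # forwards 100.00 and 100.15 only
```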
In short, the separation of so-called "alpha" and "beta", which has been discussed for some time in financial markets, has as its corollary a separation of the man and the machine involved in its production. It is overly simplistic to say that machines produce "beta" (the ability to replicate market risk) and only humans can achieve "alpha" (the ability to outperform the market); however, the specialization and separation of tasks between humans and machines is likely to accelerate as processing power continues to increase with new multicore CPUs, as data mining and other analytical software improves, and as the performance record of passive index funds and ETFs versus absolute-return funds versus traditional long-only funds becomes better understood.
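For readers less familiar with the jargon: in the standard single-factor model, a portfolio's return decomposes as r_p = alpha + beta × r_m + noise. The back-of-the-envelope sketch below shows the conventional estimation; the return series are invented purely for the example.

```python
# Estimating alpha and beta in the single-factor model
# r_p = alpha + beta * r_m + noise, using the usual moment formulas.
def alpha_beta(portfolio: list[float], market: list[float]) -> tuple[float, float]:
    n = len(market)
    mean_p = sum(portfolio) / n
    mean_m = sum(market) / n
    cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(portfolio, market)) / n
    var = sum((m - mean_m) ** 2 for m in market) / n
    beta = cov / var                   # exposure to market risk ("beta")
    alpha = mean_p - beta * mean_m     # return not explained by the market ("alpha")
    return alpha, beta


# Invented monthly returns: the portfolio roughly tracks the market
# (beta near 1) with a small residual edge (positive alpha).
r_m = [0.01, -0.02, 0.015, 0.005, -0.01]
r_p = [0.013, -0.018, 0.018, 0.008, -0.007]
a, b = alpha_beta(r_p, r_m)
print(f"alpha={a:.4f} per period, beta={b:.2f}")
```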
Another facet of this same phenomenon can be seen on the trading floor, as machines increasingly replace humans in high-velocity, thin-spread markets such as spot FX, cash equities, and US Treasuries, while expensive humans reach for higher spreads in (as yet) less transparent markets such as credit and other derivatives and structured products.
All of this is good news for Reuters, as we designed our Core Plus strategy two years ago to capitalize on these trends.
Tom, I enjoyed your post on something that has been a hot industry topic for some time now. I am also aware of Reuters' desire to bring to market 'machine-readable news', which I have no doubt will be well received by your customer base. As you rightly point out, there is a need for "rich metadata" to be associated with all 'feed-based' content, as these are the markers by which the trading applications run.

My question is whether you plan to provide manually applied tags with each news story to describe your machine-readable news (detailing the relevant company, industry, investment theme, etc. for each story…), or whether you are looking to develop a component, deployed within the traders' environment, that allows the trading operation to run any news source through it and automatically understand what a story is about. Such a 'component' would automatically apply the same descriptive tags to each story (irrespective of where the story has come from), after which the stories could be fed into the trading application.

I have often thought that a "machine that reads the news" would be infinitely more powerful than access to a single 'pre-tagged' news source (no matter how good the quality of that individual source). As a trader, I would rather have a tool that allows me to view and monitor every news source I choose as a feed and understand what each story is about from a programmatic perspective. A single categorised, reliable source of news in machine-readable format is certainly interesting (à la Reuters machine-readable news), but do you agree that a broader perspective on this issue could be of vastly more value to the customer base? I know Dow Jones are tagging up their news as well, and I have no doubt more will follow in due course, quickly giving rise to another commoditised content set… What are your thoughts on these two very different approaches? Many thanks, Tom
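To make the idea concrete, here is a rough sketch of the kind of component I have in mind; the tag rules and names are invented purely for illustration, and a real system would rely on proper entity extraction and classification rather than keyword lists.

```python
# A toy, source-agnostic news tagger: every story, wherever it comes from,
# gets the same descriptive tags before reaching the trading application.
TAG_RULES = {
    "company:Vodafone": ["vodafone"],
    "industry:telecoms": ["telecom", "mobile operator", "spectrum"],
    "theme:M&A": ["merger", "acquisition", "takeover", "bid"],
}


def tag_story(headline: str, body: str) -> list[str]:
    text = f"{headline} {body}".lower()
    return [tag for tag, keywords in TAG_RULES.items()
            if any(kw in text for kw in keywords)]


# The same function handles stories from any source.
story = ("Vodafone weighs takeover bid", "The mobile operator is said to...")
print(tag_story(*story))   # ['company:Vodafone', 'industry:telecoms', 'theme:M&A']
```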
It might be many more years (maybe even a few decades) before machines can actually replace man in the trading arena. Doing automated trading using financial data and P&L information is different from dissecting information in its right context, reading market sentiment, and taking wise decisions. One should remember that the "Artificial Intelligence" and robotics boom cited as early as the 1980s is yet to materialize!