High Frequency: An interview with the Editors of Wiley's latest interdisciplinary journal on data science


  • Author: Statistics Views
  • Date: 04 May 2017

This year, Wiley has been proud to launch High Frequency, the latest journal in our wide-ranging and diverse journals portfolio.

High Frequency is the first and only highly interdisciplinary journal devoted to high-frequency data questions. It attracts and welcomes papers from a wide array of disciplines, on questions including HF data assimilation, analysis, and/or methods for decision-making. High Frequency promotes mathematical and statistical modeling, empirical studies, computational theory and design, with applications to many topics including finance, astronomy, seismology, various other areas of physics and geosciences, environmental sciences, imaging applications such as in neuroscience, and more.

Alison Oliver talks to Editors Dr Ionut Florescu, Research Associate Professor in the Financial Engineering Division at the Stevens Institute of Technology and Director of the Hanlon Financial Systems Lab, and Frederi Viens, Chairperson of the Department of Statistics and Probability at Michigan State University, USA, about the new journal and its launch.


1. Congratulations on being appointed Co-Editors of High Frequency. How did you first become involved with the Journal?

Frederi: We had been running a conference together, and the idea originally came from a discussion amongst the journal's five founding editors. I'm not exactly sure now who came up with the concept originally, but what was interesting was that we wanted to start a journal in high frequency finance. Of the five of us, I think more people were interested in the high frequency side than in the purely finance side of it. I remember a discussion that I was having on the streets of Manhattan with Ionut around 2010. Ionut and I began further discussion and came to the conclusion that this should be a much broader journal, but the original proposal that we had offered to Wiley and to another publisher didn't have those broader aspects. So when we talk about the journal itself and the way it ended up, it was really a very long process to figure out what it was that we were really trying to offer. It took years to figure out that, in fact, we didn't want to just do high frequency finance, and I think that conversation Ionut and I had that day in Manhattan was the turning point.

2. Please could you describe to us what is meant by high frequency?

Ionut and Frederi: What we mean by high frequency is actually very simple: you take any field of application where there’s going to be data that’s coming in over time—some people like to call those things time series—and you’re watching the data as it’s coming in, and if the decisions that need to be made based on that data are being made significantly more frequently than would usually happen in that field, then that’s high frequency. So that’s how we define it. It has to be decisions being made based on data that’s coming in over time at a rate which is higher than what is normally done in the field. And significantly higher.

We like to give a couple of examples: in finance, even without talking about ultra-high frequency, high frequency is information which may be coming in every few seconds, or every second, or ten pieces of information every second, those kinds of information rates in finance. If you look at people who are working on macroeconomics, where a lot of things are done with one data point per quarter or per year, well then high frequency would be an analysis which is done over the same period of time perhaps, but with one data point every week. So you see the two extremes there: in finance, you might be an algorithmic trader whose horizon is the end of the trading day, things are happening every second and you have to make decisions that frequently in order to keep up with all the other market actors who are trading that frequently; in macroeconomics, decisions are made which affect a multi-year period, and then if you have a way of making a decision every week which makes a difference to the outcome over several years, then that is high frequency. You see what I mean?

Another example is people who are working in hydrology, which includes the study of water flows in rivers, lakes, dams, wetlands, etc., as part of the environmental sciences or the geosciences: folks there are also beginning to understand and advocate that high frequency can be valuable. In hydrology, if you're making decisions based on data that you're gathering every day, that could be your high frequency. Or maybe it's every hour, because you need to understand the difference between what happens in the morning and what happens in the evening. For instance, it may be impossible to understand the water balance equations for a specific lake in a wet region without looking at high-frequency intra-day rainfall data. That's not a typical rate; people typically will be looking at hydrology from a standpoint of one data point per week or per month or per year, but if it's every day or every hour, that's high frequency.

Another class of examples is in astrophysics. Let's take the case of a radio telescope which has very high capacity for taking images of the sky in radio frequencies. Depending on how much information is flowing through the telescope, we could be looking at enormous quantities of information. Back in 2010, when a decision was being made on where to locate the so-called SKA (Square Kilometre Array) telescope, project leaders in Australia, New Zealand, and South Africa were already estimating that the flow of data would be comparable to the flow of information in entire sections of the Internet at the time. The best estimate for when the SKA telescope reaches its full data collection capacity in the 2020s is that it will generate on the order of several exabytes per day. This is only about one order of magnitude lower than the projected daily data flow across the internet in the 2020s. When one understands that this information flow will be continuous, one sees a big difference with other areas of astrophysics, where people are taking an optical image of the sky a few times a night.

3. What makes High Frequency different from other journals in the field?

Ionut: High frequency originated in finance due to the recent availability of suitable data. However, the models developed for studying financial data transcend this domain and are applicable to multiple areas in science and technology. Thus, in large measure the new journal, and in smaller measure the conference series, are now dedicated to studying data sampled very frequently in all areas of science. This reliance on multi-area models makes the journal unique today as the sole reference for multiple areas of science which are traditionally distinct.

Frederi: Ionut’s answer is spot-on. What makes it unique is that we really are not at all wedded to a single discipline, and what we’re trying to do with this journal is to coalesce a group of scholars around a topic of high frequency data science and to try and see if we can make that into a new discipline in its own right.

There’s the data side of things, but there are a lot of researchers who don’t necessarily identify as data scientists, even though they’re using and analyzing data, and those people also have their idea of what high frequency analysis means, and it doesn’t necessarily have to look anything like statistics or computer science. We’re just trying to get everyone in the conversation to see how we can develop this into not just a new field, but a new discipline on how to deal with the data that comes in via high frequency. Then one is able to make decisions using all the information that’s there in the most efficient possible way, whilst simultaneously being able to handle the rate at which the data is arriving.

So we’re kind of trying to use this journal as a way to develop this new discipline.

4. Who should be reading the Journal and why?

Ionut: The audience of the journal is mainly threefold: traditional academics looking for a venue for multidisciplinary work that does not fit the confines of a traditional area or journal; industry professionals looking for models that are directly applicable to their work; and finally, and more specifically to finance, government regulators looking for evidence of behaviour and detection of anomalies in trading data.

Frederi: There are two audiences, I would say. One of them is the people who we're targeting to write for the journal. But there's another audience: people who we know are looking at high-frequency data and need to make decisions based on the information they're getting, but who don't necessarily have the tools, or don't have anybody in their organization who knows enough about that sort of analysis. We wish for some of the articles in the journal to be extremely accessible, so that people from completely different fields can actually understand what the motivations are behind the methods and understand that the methods could be applicable to their field, even if they're describing a different field.

We wish to encourage articles which could have a broad impact in fields that are completely distinct from, and may bear little resemblance to, the field the article is written about. These are articles which could be very helpful to all sorts of people faced with similar problems, and we're going to target those articles, asking the authors to write a separate version that is much shorter and more condensed. These shorter articles will explain to the broadest possible audience what the tools are about, what their merits are, and what the impact could be, so that others can be inspired and even dig into the original article to see how they might apply those tools to their own area.

5. What are the kinds of papers that you would like to encourage?

Frederi: Typically a paper will be motivated by a real-world problem, most frequently with a conclusion that proposes a data analysis and answers questions. The paper will typically have developed new methodology, whether statistical, economic, physical, or biological. Some areas will propose uncertainty quantifications in their answers, and by uncertainty quantification we mean, "Here is what we think the answer is, but we're not completely sure, so here is, say, a 95% confidence interval, or here is the distribution that we think the parameter of interest follows and how sure or unsure we are." Then we want to promote areas of science and engineering where high frequency data is becoming common but specific methods are not being applied. Or some of the new methods in areas like finance could be helpful in these other areas; we already mentioned astrophysics, environmental statistics, and also seismology and network analysis.
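The uncertainty quantification described above can be made concrete with a small sketch. The data and helper function below are purely hypothetical illustrations (not taken from the journal or the interview): a normal-approximation 95% confidence interval for the mean of a short high-frequency sample, in plain Python.

```python
import math

def mean_confidence_interval(data, z=1.96):
    """Normal-approximation confidence interval for the sample mean.

    z = 1.96 corresponds to a 95% confidence level.
    """
    n = len(data)
    mean = sum(data) / n
    # Unbiased sample variance (divide by n - 1)
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Hypothetical per-second observations over a short window
sample = [0.01, -0.02, 0.015, 0.0, 0.005, -0.01, 0.02, 0.003]
lo, hi = mean_confidence_interval(sample)
print(f"95% CI for the mean: ({lo:.4f}, {hi:.4f})")
```

This is the simplest possible version of the idea; real high-frequency analyses would account for dependence over time in the data, which this independent-sample formula ignores.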

Some machine-learning algorithms are capable of handling very complex high-dimensional data, but the computational demands are too high to propose a method which can handle a steady stream of HF data; i.e. most case studies look at a very small part of the data available, and do not propose methods which provide analyses and answers at a rate which is comparable to the rate at which the data arrives. The new "HF" emphasis in this side of data science would have to be in the development of agile and compact algorithms which still get the job done. There is a trade-off between agility and accuracy; figuring out this trade-off is happening every day in high-frequency trading, but not necessarily in other fields where it could also be important.

Ionut: We are looking for papers that have a directly applicable component as well as papers that explain the importance and impact of the work developed in the paper with some sort of data that is sampled with high frequency. It doesn’t matter where the data is coming from, but the paper must have some sort of data analysis component.

Frederi: We don’t necessarily require that people solve problems with data sets. Of course, most of the articles will, but we are happy to have people write theoretical articles about what might happen given a data set with some sort of specificity and have mathematics being written about that topic. So that would be the most theoretical end of it. And then the most applied end would be talking about something which is completely specific to an application area where there are data sets, where there’s a question and people use a method and they answer it without even necessarily having a major emphasis on developing a totally new method; just being able to handle a certain flow of information at a certain rate which is significantly higher than usual for that particular field, and providing an answer, and doing that in a novel way, that’s great for us.

6. What are you enjoying most about being Editor so far?

Frederi: Ionut and I really enjoy working on ScholarOne! It works really well. I've been fortunate enough to be able to travel around North America to meet a few members of the editorial board to discuss directions that the journal should be taking in their particular areas of expertise. So I've been enjoying that, and I've also enjoyed a workshop that Mark Spencer organized at the Joint Mathematics Meetings in Atlanta in January. What Mark did was bring in a group of early career scholars who are trying to make their mark in academia and figure out what kinds of papers they want to write, and Mark explained what he is looking for in the journals he handles on the math and stats side, and what he thinks makes a good paper. Not only that, he also explained what makes an impactful paper and how to increase the chances of having an impact in the way that you write papers. So I enjoyed listening to what Mark was saying there, and since I was one of the panellists in the panel that he organized, I took that opportunity to promote the journal, of course, because that was one of the main reasons I went, and also to explain how Mark's vision for what makes a good article fits into the things that we're looking for in our new journal.

7. What are your main priorities/objectives for the Journal in the year ahead?

Ionut: I hope we gather enough traction and submissions to take the journal to a proper level.

Frederi: I agree with Ionut in that we want to aim for the highest possible quality; and just encourage as many authors as possible to consider our new journal.

It's always a bit difficult when you're starting a new journal, because early career researchers are going to hesitate to publish their work in a new journal, since it's not necessarily indexed and the journal hasn't yet established its reputation. So that's a challenge for us, and we're going to continue to encourage the members of the editorial board to submit their work to the journal and to encourage all their colleagues to submit. That's by far the biggest objective for us - to have a proper stream of articles so that we can get the journal to where we want it to be at the end of the year.

8. The Journal will feature research from the Stevens Institute of Technology and their annual conference. As Journal Co-Editors, how does the relationship between Journal and Institute work?

Ionut: Both Editors-in-Chief are also members of the organizing committee of the conference. Therefore we envision an interplay between the two, where the best presentations are encouraged to be submitted to the journal, and the best papers submitted and accepted are invited to participate in future editions of the conference.

Stevens is very close to one of the Wiley headquarters (Hoboken), and because of this proximity we were able to make some visits, which basically convinced our colleagues at Stevens that this is a worthy initiative. Stevens is now supporting this endeavour by encouraging submissions for publication in this journal.

9. Please could you tell us more about your educational backgrounds and how you first became aware of statistics as a discipline?

Frederi: That’s kind of a funny question because I was a very strange child, and when I was ten years old, I was obsessed with statistics. I was very aware of descriptive statistics when I was a young boy. And then after that, people take different trajectories, and I thought I wanted to be a physicist, and then I became a mathematician. I have a PhD in mathematics, which is not the same as statistics, but the field of study that both Ionut and I have is probability theory, and that is the foundation of statistics in many ways. People today may start to try to disagree with that, but it’s still going to be true that you can’t do a lot of statistics if you don’t know probability theory.

I do have several papers which are actual stat papers, including on the very applied side: I’ve been working on some applied problems where we use existing methodology in novel ways. So I consider myself a statistician on the theoretical side, on the applied side, as well, but still also a mathematician.

Ionut: I have a Ph.D. in Statistics from one of the best statistics departments in the world (Purdue University). Of course, partially due to my education, I believe statistics and probability allow us to look at and understand real phenomena without actually having complete information and the whole spectrum of interactions and complexity inherent in real life events.

For me it was actually computer science that pointed me to probability and statistics. I was fascinated with programming and computers, and I wrote my first program when I was eleven years old. My folks had just got me one of those Spectrum 64 KB computers that required a TV screen to work and were primarily used for gaming. This particular one used a tape player and cassettes to load games. You had to use the tape player and listen to a very annoying sound while the game loaded over the next 15 minutes. Oftentimes, after ten minutes there would be a sound error and it would tell me 'error in loading'. It was very frustrating, and sometimes you didn't want to load games anymore.

However, the particular model I was using came with the BASIC programming language pre-installed, meaning that upon powering on you could program in BASIC without having to wait for any loading. So, dejected by the repeated failures, I actually started learning to program in BASIC. I wanted to have the computer do things, so I wrote programs - without even knowing what I was doing. It was very cool. And the one thing I noticed was this function RAND, R-A-N-D, which generated random numbers. I didn't entirely understand it at the time, but I noticed that you could create these numbers and they would be different every time. Then you could do all this other interesting stuff, such as using those numbers to select a colour and a location, thus creating a fascinating kaleidoscope of colours on the TV screen.

So essentially that opened my eyes to probability, and that eventually got me into Purdue and my PhD where my supervisor was Frederi! I think the applicability of statistics is what fascinated him. Even if he didn’t have formal training in statistics, he has very strong formal training in probability, which is the basis for statistics and for everything that we do.

10. From your experience, are there specific challenges in conveying or teaching statistics concepts within business and finance?

Ionut: I believe the most important part of teaching statistics specifically for business and finance is the professor. The teacher needs to be aware of all the latest developments and also must not limit him- or herself to just basic regression. Econometrics, for instance, as it is currently taught in business schools all over the US, is a course in dire need of restructuring. The impact of, and the connection with, probability needs to be acutely ingrained into the student's understanding. This may not be achievable using the point-and-click approach of commercial software.

Frederi: Ionut has had more experience than I have in trying to teach actual statistics to these folks who are interested in finance, so his answer is probably better than mine. My experience is that it's difficult for certain people, who are really coming in from the finance and business side, to handle the level of mathematics required to understand how statistics in finance can really work. So it's a real challenge. Now that I'm a department chair, I don't get a lot of chances to teach this material anymore, but I'm constantly thinking and discussing with my colleagues about how to change the curriculum for these quantitative finance courses – how to change them so that we're able to explain the mathematical modelling, the probability theory, and how to do the statistics, and to try to reach a broader audience of people who haven't had adequate mathematics training due to their business backgrounds, but who are still very smart and capable of understanding a lot of the tools that we use. So that's the real challenge for us, and it's a work in progress, and we're always trying to reach the broadest possible audience of students.

11. What has been the most exciting development that you have worked on in teaching business management or statistics during your career?

Ionut: I have taught statistics classes to business majors, and the most rewarding aspect comes in a subsequent class that requires the knowledge gained in that statistics class. For example, I have taught time series and have seen a clear distinction between students who took my version of proper statistics and students who had an econometrics background.

Frederi: In 2010-2011, I worked in Washington DC as a foreign service officer for the US Department of State, when it was under Hillary Clinton. I was working for the Africa Bureau, and over 200 people in the building, plus all the ambassadors and the consular personnel overseas, had access to me. They would throw a document at me which had some technical elements they didn't understand at all, because their training was in the foreign service, in diplomacy. They had to be able to understand what those technical pieces were, and my job was to inform everybody, to check people's language in their cables to make sure it was correct, and to make sure people weren't barking up the wrong tree.

So I was the scientist and engineer for the whole Bureau for a year, and that really opened my eyes to a whole other world of inquiries that most academics will never see because they don’t have the opportunity to see very very far outside of their field. That’s one of the reasons I’m trying to understand how we can use certain quantitative methods in all sorts of different fields in a practical way. My time at the State Department was actually when I found out about what was going on back then in astrostatistics; it’s much more developed today, but already back then it was apparent that there were problems that needed solving.

That's also when I realized that when you're trying to solve the world's biggest problems of what to do about development and global economics, there's one issue that you can't ever get around, and that's the issue of how to have a resilient food system which doesn't overwhelm the planet. That itself is a question where statistical inquiry is underdeveloped, and that led me, several years later, to work in agricultural economics. This is a field where high frequency has a very important role to play, because people need to react in real time when they're trying to figure out the best way of planting their crops. But the state of information is such that people don't even know that there are certain tools that would be beneficial to them; we have an entire discipline where statistics, let alone high frequency data analysis, is not even on most people's radar. These are the types of things that made me think we can do something really new with this journal, because there's a whole world of people out there who don't even know these questions exist, and among them are those who are working on the world's hardest and most wicked problems in the development of society, in moving forward for this century.

12. Are there people or events that have been influential in your career?

Ionut: My professors at Purdue, especially Frederi, who was my advisor, and also Thomas Sellke, Herman Rubin, and Philip Protter; and my former mathematics teachers in Romania, chief among them Professor Valentin Nicula. There are so many people who shaped my knowledge and made me who I am today.

Frederi: I’m very flattered that Ionut has named me. It’s true that I was his PhD advisor, but we’re almost the same age, and we have the same level of maturity and all that. So thank you, Ionut, for mentioning my name, but you’ve probably influenced me as much as I’ve influenced you.

I’ve had many PhD students, and Ionut was my first PhD student, so you can imagine what kind of impact that has on somebody’s career, when somebody has been able to advise so many different students, and the first student is of course somebody very special in their career.

Interestingly, Ionut mentioned Philip Protter, who also had an extremely important influence on my career. I would say the entire Africa Bureau in the US State Department had an extraordinary influence on my career too, and while I can't mention specific names, there are many people there by whom I was influenced.

At Purdue University, when I started working in agricultural economics, there was one person that I think I need to mention and that’s Professor Thomas Hertel. He’s a famous agricultural economist, and a lot of the work that I’m doing today on the applied side is almost entirely thanks to his encouragement.

Another professor in the same department of agricultural economics with whom I'm still working today is Otto Doering, who is a living legend in the world of development economics. So Tom Hertel's influence was in helping me understand the field, knowing what people are looking for, and being able to help them as a statistician. Otto Doering's influence was to not be afraid of working on problems that no one else is willing to touch. In fact, he and I are working with a team of students on a problem where some high frequency data analysis may need to happen. There's an area in the Central Sahel, in Africa, called the Lake Chad Basin. People are trying to understand why Lake Chad has so many variations: why the lake goes up and down in terms of area, and whether people are having a negative or positive influence on it. It's an extremely difficult question, as there's very little data because it's such a difficult part of the world. The terrorist group Boko Haram operates there, so there's no way we can travel to the region to find out for ourselves. We have to base everything on either satellite data or data provided by the four neighbouring countries, and that's extremely difficult. But with Otto's help and encouragement, we will get to the bottom of this difficult problem.


