This is a living document in which I keep a checklist for writing good research articles. While the document is bound to grow, I will try to keep it brief and skimmable!
What constitutes good research is a tough question, which is why the most common deliverable of our work is the peer-reviewed article: it allows a community of researchers to decide what good research is and to evolve the “definition” of good research. The list below will help you please reviewers, which will help you attract more readers and iteratively lead to better research. Try to think of the list from both sides: sometimes you are reviewed, sometimes you are the reviewer. Why was your article accepted in the past? Why was your article rejected in the past? What did you review in the past that you liked? What did you review in the past that you disliked?
Your abstract is your paper. Write your abstract in such a way that it condenses the paper, the whole paper, and nothing but the paper (so help me God). Having one abstract sentence per paper section is a good start.
Your introduction is your paper. Write your introduction in such a way that it condenses the paper, the whole paper, and nothing but the paper. Having one introduction paragraph per paper section is a good start.
Problem statement and contributions must be on the first page. Otherwise you put the reviewer in a bad mood and risk getting your article rejected. “Why the heck should I bother reading the other 9 pages of this article?”
Do you clearly distinguish problem statement, research question and hypothesis? You might not have a separate heading for each one of them, but they must be clear in your head! If you have any doubts about the terms, please read the associated Wikipedia articles.
How do you test your hypothesis?
- Mathematical proofs:
  - Principle: Formulate some axioms and derive conclusions.
  - Pros: Mathematics is the only way to prove that something is true; the other methods only bring evidence.
  - Cons: The chosen axioms might not be useful (e.g., assuming the speed of light were infinite) or the proof might be intractable.
- Analyzing existing data:
  - Principle: Pick existing data (e.g., Wikipedia traces), analyse it and derive conclusions.
  - Pros: You can make strong statements about the phenomenon that the data captured. Notice that this is only evidence and not a proof! You might also make convincing projections or use the data to make other forms of hypothesis testing more convincing (see below).
  - Cons: Existing data does not help test “what if” scenarios. Also, any interpolations and extrapolations you make might look subjective and unreasonable (e.g., “since the price per GB halves every 18 months, storage costs will become negligible in 2020”).
- User studies:
  - Principle: Pick n people and ask them something about your system.
  - Pros: Very useful to quantify subjective issues, such as website responsiveness. See the Raft paper for a very good example of how to test the hypothesis “A is easier to understand than B”.
  - Cons: Rarely used in systems research.
- Simulation (“simulated” experiments):
  - Principle: Create a system that abstracts a real system, i.e., one that implements the behaviour of the real system that is important relative to the problem. For example, two 3D matrices of temperature and humidity with some equations could simulate the weather in Umeå.
  - Pros: Generally quick to build and quick to explore many initial conditions. Can explore systems that are not yet or not easily available (e.g., telco clouds).
  - Cons: Hard to get right. What are the important behaviours of the real system? For example, popular network simulators have failed to capture all important behaviours of networks.
- Emulation (“real” experiments):
  - Principle: Create a system that is as close as possible to the real system. Some parts are emulated, i.e., their externally observable behaviour cannot be distinguished from that of a real part. For example, a network emulator can produce latencies and bandwidths typically found in the Internet, and a workload generator can produce requests similar to those of human users with a browser.
  - Pros: Emulation can be a shortcut to getting the important behaviour of the real system right, e.g., latency spikes due to hardware energy saving, but there is no guarantee. In some communities, real experiments can be more convincing.
  - Cons: Experiments can be tedious to set up, difficult to reproduce and easy to bias. Also, they tend to be highly time-consuming, unless you somehow emulate time. :D
- Deployment in production:
  - Principle: Deploy a new system in production. E.g., “We replace the file-system of our university mail server with Btrfs.”
  - Pros: Nobody can contest realism, usefulness, etc.
  - Cons: Highly unlikely to be accepted by operators, except if the system is in beta or the users are desperate for a solution to their problem (consider founding a start-up in the latter case :D).
Consider combining these approaches, e.g., simulation to cover a large problem space, then real experiments to validate the simulation.
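For instance, the storage-price extrapolation criticized under “Analyzing existing data” boils down to a one-line model. The starting price and the 18-month halving period below are illustrative assumptions, not real market data:

```python
# Naive "price per GB halves every 18 months" projection.
# The starting price and halving period are illustrative assumptions.
def projected_price_per_gb(start_price, years, halving_months=18):
    """Project a price forward, assuming it halves every halving_months."""
    halvings = years * 12 / halving_months
    return start_price * 0.5 ** halvings

# Nine years contain six halving periods, so the naive model
# divides the price by 2**6 = 64 -- a strong claim to defend in a paper.
future_price = projected_price_per_gb(0.10, years=9)
```

Writing the model down this explicitly makes the assumption visible, which is exactly what a skeptical reviewer will probe.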
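The weather-simulation idea above can be made concrete with a toy sketch; the grid size, initial temperatures and diffusion coefficient are invented for illustration, not taken from any real model:

```python
# Toy simulation sketch: one explicit diffusion step on a 2D temperature grid.
# Grid size, initial values and the coefficient alpha are invented examples.
def diffusion_step(grid, alpha=0.1):
    """Return a new grid where each interior cell moves toward the
    average of its four neighbours (a crude heat-equation update)."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbours = (grid[i - 1][j] + grid[i + 1][j]
                          + grid[i][j - 1] + grid[i][j + 1])
            new[i][j] = grid[i][j] + alpha * (neighbours - 4 * grid[i][j])
    return new

# A hot spot in the middle of a cold plate spreads out over time.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 100.0
for _ in range(10):
    grid = diffusion_step(grid)
```

Even this toy version illustrates the “hard to get right” con: fixed borders, a single scalar coefficient and no humidity already abstract away behaviours that might matter for the problem.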
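Similarly, the emulation principle can be sketched in a few lines: wrap a local call with an artificial delay so that, externally, it resembles a remote one. The Gaussian delay parameters below are invented, not calibrated against any real network:

```python
import random
import time

# Latency-emulation sketch: wrap a function with an artificial delay.
# The delay parameters are invented, not measured on a real network.
def with_emulated_latency(func, mean_ms=50.0, jitter_ms=20.0, rng=random):
    """Return a wrapper that sleeps for a randomly drawn delay, then calls func."""
    def wrapper(*args, **kwargs):
        delay_ms = max(0.0, rng.gauss(mean_ms, jitter_ms))
        time.sleep(delay_ms / 1000.0)
        return func(*args, **kwargs)
    return wrapper

# A local "service" call now behaves like a remote one.
slow_echo = with_emulated_latency(lambda x: x, mean_ms=5.0, jitter_ms=1.0)
```

Replacing time.sleep with a virtual clock is one way to “emulate time” and cut the running time of such experiments, as hinted above.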
Are you embedding your solution or experiment in the problem statement? Of course, your experiments should be based on the problem statement, but make sure you did not change the problem statement to favour or simplify your experiments.
Is your experiment capturing the important aspects of the problem? If your problem is network latency and your state-of-the-art, feature-rich cloud simulator has no support for network latencies, then it is useless. Similarly, network emulators might not produce precise network latencies.
What are the limitations of your hypothesis testing? Will your solution work if the number of servers is increased 100-fold? What if you use a 100Gbps network instead of the 1Gbps in your experiments? Do not get demoralised, but rather find strength in arguments: “The usefulness of our empirical evaluation may be diminished with the commercialization of new cloud services. […] Our performance evaluation results remain representative for clouds that multiplex their resources among their users […].”
Can you cite at least two articles from the past year(s) of the target venue? This might sound like ass-kissing, but it is a very useful tool to ensure your article fits the target community. If you cannot relate to any articles in that venue, chances are reviewers will reject your article with weird comments (“I don’t get why auto-scaling is important, Amazon already does that!”). Try to read those articles at the “meta” level:
What vocabulary do they use? You should definitely write a few lines about cloud computing in CDC, but you would certainly not waste paper on defining “cloud computing” in ICCAC. (When I review such papers I get sleepy starting with paragraph 1.) Are terms such as “node”, “network”, “bandwidth”, “response time”, “throughput”, “controller” used with the same meaning as you have them in your article? (E.g., “network” might be understood as a graph, a social network or a computer network. “Node” might be a network router, a graph node, a supercomputer “server” or a peer-to-peer agent.) If the terms are used differently then make sure you properly define your terms to avoid confusing the reviewers.
What concepts and techniques are assumed known? You would certainly not write about PID controllers in a control conference, but you might need to insert a mini-lecture on PID controllers for a peer-to-peer conference.
What are accepted forms of hypothesis testing? Does that community favour simulation? If yes, is there a “gold standard” (e.g., NS3, CloudSim, PeerSim, SimGrid)? Do you or can you use it? If not, make sure you argue why you refrained from using it!
Does the community favour real experiments? What workloads do they commonly use (Wikipedia traces, FIFA traces, synthetic ones)? What applications (RUBiS, RUBBoS, CloudSuite)? What is the most common nature (single computer, cluster, data-center, IaaS cloud) and size of the testbed?
Any seminal papers that you should read? Certainly you would reject a paper from somebody who mixes up IaaS and PaaS, a sign of not having read “The NIST Definition of Cloud Computing”.
What are their core challenges? “Data-centers are inefficient”, “Moore’s law has stopped”, “Software systems are getting increasingly complex”, “Cloud applications encounter failures”. Does your paper address the core challenges?
Are there certain expectations? E.g., to open-source your simulator, to have at least 2 performance-related plots, to have around 20 references, etc.
If you see a mismatch between your article and the related articles from the target venue, you have several choices:
- Adapt: Add another experiment with a new workload or application. Change the terms you use. Add another sentence to link your problem to the core challenges. Rethink your problem, your solution and your evaluation. Don’t overdo it, though: you do not want to produce YAPO (“yet another paper on”).
- Use more wording: Spend a few extra paragraphs to properly define your terms. In fact, make the terms bold when defining them to ensure your reviewer does not miss the definitions. Spend more words explaining your approach, your technique, your solution. Argue why you did not use the gold standard: “We refrained from using SimGrid due to a bug that exaggerates latencies of packets sent on localhost.” (true story)
- Change venue: You will never get an article about Wikipedia traces accepted in JSSPP, just like you would not accept an article about food diets in the cloud control workshop. Don’t waste time: look for a venue that is desperate to read an article like yours!