**Abstract, typically not more than 100-150 words;
**Introduction (brief!): introduce the problem and outline the solution; the statement of the problem should include a clear explanation of why the problem is important (or interesting).
**Related Work (here, or before the summary). Hint: In the case of a conference, make sure to cite the work of the PC co-chairs and as many other PC members as are remotely plausible, as well as anything relevant from the previous two proceedings. In the case of a journal or magazine, cite anything relevant from the last 2-3 years' volumes.
**Outline of the rest of the paper: "The remainder of the paper is organized as follows. In Section 2, we introduce ... Section 3 describes ... Finally, we describe future work in Section 5." [Note that Section is capitalized. Also, vary your expression between "section" being the subject of the sentence, as in "Section 2 discusses ...", and "In Section 3, we discuss ...".]
**Body of paper
***approach, architecture

The body should contain sufficient motivation, with at least one example scenario, preferably two, with illustrating figures, followed by a crisp statement of the generic problem and model, i.e., the functionality, particularly emphasizing "new" functionality. The paper may or may not include formalisms. General evaluations of your algorithm or architecture, e.g., material proving that the algorithm is O(log N), go here, not in the evaluation section.

**Related work, if not done at the beginning
**Summary and Future Work
***often repeats the main result
**Appendix (to be cut first if forced to):
***detailed protocol descriptions
***proofs with more than two lines
***other low-level but important details
It is recommended that you write the approach and results sections first, since they go together. Then write the problem section, if it is separate from the introduction; then the conclusions; then the introduction. Write the introduction last, since it glosses the conclusions in one of its last paragraphs. Finally, write the abstract. Last, give your paper a title.
*Avoid all but the most readily understood abbreviations.
*Avoid common phrases like "novel", "performance evaluation" and "architecture", since almost every paper does a performance evaluation of some architecture and it better be novel. Unless somebody wants to see 10,000 Google results, nobody searches for these types of words.

->Use adjectives that describe the distinctive features of your work, e.g., reliable, scalable, high-performance, robust, low-complexity, or low-cost. (There are obviously exceptions, e.g., when the performance evaluation is the core of the paper. Even in that case, something more specific is preferable, as in "Delay measurements of X" or "The quality of service for FedEx deliveries".)

*If you need inspiration for a paper title, you can consult the Automatic Systems Research Topic or Paper Title Generator.
*Acknowledge your funding sources. Some sources have specific wording requirements and may prefer that the grant number be listed. The NSF requires text like "This work was supported by the National Science Foundation under grant EIA NN-NNNNN."
*Generally, anonymous reviewers don't get acknowledged, unless they really provided an exceptional level of feedback or insight. Rather than "We thank X for helping us with Y", you might vary the expression, as in "X helped with Y."
In all but extended abstracts, numerical results and simulations should be reported in enough detail that the reader can duplicate the results. This should include all parameters used, indications of the number of samples that contributed to the analysis and any initial conditions, if relevant.

When presenting simulation results, provide insight into the statistical confidence. If at all possible, provide confidence intervals. If there's a "strange" behavior in the graph (e.g., a dip, peak or change in slope), this behavior either needs to be explained or reasons must be given why this is simply due to statistical aberration. In the latter case, gathering more samples is probably advised.
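As a minimal sketch of attaching a confidence interval to a simulation result (assuming independent runs and a normal approximation; the measurement values below are made up):

```python
import math
import random

def mean_ci(samples, z=1.96):
    """Return (mean, half-width) of a normal-approximation 95% confidence interval."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / (n - 1)  # unbiased sample variance
    return m, z * math.sqrt(var / n)

# Treat each independent simulation run as one sample:
random.seed(1)
runs = [10 + random.gauss(0, 2) for _ in range(30)]  # 30 hypothetical runs
m, h = mean_ci(runs)
print(f"mean = {m:.2f} +/- {h:.2f} (95% CI, n={len(runs)})")
```

Report n and the interval alongside the mean; if the interval is too wide to support the claim being made, that is the signal to gather more samples.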

Figures should be chosen wisely. You can never lay out the whole parameter space, so provide insight into which parameters are significant over what range and which ones are less important. It's not very entertaining to present lots of flat or linear lines.

The description of the graph should not just repeat the graphically obvious such as "the delay rises with the load", but explain, for example, how this increase relates to the load increase. Is it linear? Does it follow some well-known other system behaviors such as standard queueing systems?
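For example, if the system behaves like a standard M/M/1 queue, the mean time in system is W = 1/(mu - lambda), which explains why a delay curve bends upward sharply near saturation rather than rising linearly. A quick numeric sketch (the service rate and load values are illustrative):

```python
def mm1_delay(load, service_rate=1.0):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    arrival_rate = load * service_rate  # load is the utilization rho < 1
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at load >= 1")
    return 1.0 / (service_rate - arrival_rate)

# Doubling the load from 0.4 to 0.8 triples the delay:
for rho in (0.4, 0.8, 0.9, 0.95):
    print(f"load {rho:.2f}: mean delay {mm1_delay(rho):.1f}")
```

Relating the measured curve to such a known model (or explaining why it deviates from one) is far more informative than restating that "delay rises with load".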
*There's no need to enclose numbers in $$ (math mode).
*Use \cite{a,b,c}, not \cite{a} \cite{b} \cite{c}.
*Use the times package (\usepackage{times}) with LaTeX2e - it comes out much nicer on printers with different resolutions. Plus, compared to cmr, it probably squeezes an extra 10% of text out of your conference allotment.
*Non-index (descriptive) subscripts are set in roman, not italic. For example,
->x_{\mathit{index}}
->x_{\mathrm{max}}
->x_{\mathrm{f}}
->In the first example, index is a true index that gets substituted or instantiated by numbers, as in x_1, x_2, so it is set in italic; in the other two, "max" and "f" are descriptive labels, so they are set in roman.

*Multi-letter variable names should be surrounded by \mathit{}, not just typed in $$ (math mode), as otherwise the spacing will be wrong (TeX sets a bare multi-letter name as if it were a product of single-letter variables).
*For uniformity, use the LaTeX2e graphicx package, not the earlier psfig package:
->\begin{figure}
->\includegraphics{filename}
->\caption{Some figure}
->\end{figure}
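Taken together, the LaTeX suggestions above amount to a sketch like the following (the citation keys a, b, c and the figure file name are placeholders):

```latex
\documentclass{article}
\usepackage{times}     % Times instead of cmr: denser, prints better
\usepackage{graphicx}  % LaTeX2e graphics, not the older psfig

\begin{document}
The load stays below $x_{\mathrm{max}}$   % descriptive subscript: roman
for every sample $x_i$,                   % index subscript: italic
and $\mathit{load}$                       % multi-letter name: \mathit
was studied before~\cite{a,b,c}.          % one \cite with all keys

\begin{figure}
  \centering
  \includegraphics[width=\columnwidth]{myfigure}
  \caption{Some figure}
\end{figure}
\end{document}
```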

!!! Title
!!! Authors
!!! Abstract
!!! Introduction
!!! Body of Paper
!!! Bibliography
!!! Acknowledgement
!!! Reporting Numerical Results and Simulation
!!! LaTeX Considerations
!!! Things to Avoid
'''Too much motivational material'''\\

->Three reasons are enough -- and they should be described very briefly.
'''Describing the obvious parts of the result'''\\

->"Obvious" is defined as any result that a graduate of our program would suggest as a solution if you pose the problem that the result solves.
'''Describing unnecessary details'''\\

->A detail is unnecessary, if its omission will not harm the reader's ability to understand the important novel aspects of the result.
'''Spelling errors'''\\

->With the availability of spell checkers, there is no reason to have spelling errors in a manuscript. If you as the author didn't take the time to spell-check your paper, why should the editor or reviewer take the time to read it or trust that your diligence in technical matters is any higher than your diligence in presentation? Note, however, that spell checkers don't catch all common errors, in particular word duplication ("the the"). If in doubt, consult a dictionary such as the (online) Merriam-Webster.\\

'''Text in Arial:'''\\

->Arial and other sans-serif fonts are fine for slides and posters, but are harder to read in continuous text. Use Times Roman or similar serif fonts. Unusual fonts are less likely to be available at the recipient and may cause printing or display problems. Material mainly to be read online typically uses sans-serif fonts.

!!! Guidelines for Experimental Papers
The following "Guidelines for Experimental Papers" were set forth for researchers submitting articles to the journal ''Machine Learning''.

#Papers that introduce a new learning "setting" or type of application should justify the relevance and importance of this setting, for example, based on its utility in applications, its appropriateness as a model of human or animal learning, or its importance in addressing fundamental questions in machine learning.
#Papers describing a new algorithm should be clear, precise, and written in a way that allows the reader to compare the algorithm to other algorithms. For example, most learning algorithms can be viewed as optimizing (at least approximately) some measure of performance. A good way to describe a new algorithm is to make this performance measure explicit. Another useful way of describing an algorithm is to define the space of hypotheses that it searches when optimizing the performance measure.
#Papers introducing a new algorithm should conduct experiments comparing it to state-of-the-art algorithms for the same or similar problems. Where possible, performance should also be compared against an absolute standard of ideal performance. Performance should also be compared against a naive standard (e.g., random guessing, guessing the most common class, etc.) as well. Unusual performance criteria should be carefully defined and justified.
#All experiments must include measures of uncertainty of the conclusions. These typically take the form of confidence intervals, statistical tests, or estimates of standard error. Proper experimental methodology should be employed. For example, if "test sets" are used to measure generalization performance, no information from the test set should be available to the learning process.
#Descriptions of the software and data sufficient to replicate the experiments must be included in the paper. Once the paper has appeared in Machine Learning, authors are strongly urged to make the data used in experiments available to other scientists wishing to replicate the experiments. An excellent way to achieve this is to deposit the data sets at the Irvine Repository of Machine Learning Databases. Another good option is to add your data sets to the DELVE benchmark collection at the University of Toronto. For proprietary data sets, authors are encouraged to develop synthetic data sets having the same statistical properties. These synthetic data sets can then be made freely available.
#Conclusions drawn from a series of experimental runs should be clearly stated. Graphical display of experimental data can be very effective. Supporting tables of exact numerical results from experiments should be provided in an appendix.
#Limitations of the algorithm should be described in detail. Interesting cases where an algorithm fails are important in clarifying the range of applicability of an algorithm.
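Points 3 and 4 above can be sketched numerically: compare against a naive majority-class baseline and attach a standard error to the accuracy (the labels and predictions below are fabricated for illustration):

```python
from collections import Counter

def majority_baseline(labels):
    """Accuracy of always guessing the most common class."""
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

def accuracy_with_stderr(predictions, labels):
    """Accuracy and its binomial standard error sqrt(p(1-p)/n)."""
    n = len(labels)
    acc = sum(p == y for p, y in zip(predictions, labels)) / n
    return acc, (acc * (1 - acc) / n) ** 0.5

labels = ["spam"] * 70 + ["ham"] * 30
preds = ["spam"] * 80 + ["ham"] * 20  # hypothetical classifier output
base = majority_baseline(labels)
acc, se = accuracy_with_stderr(preds, labels)
print(f"baseline {base:.2f}, classifier {acc:.2f} +/- {se:.2f}")
```

A result is only interesting if it clears the naive baseline by clearly more than the measure of uncertainty.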
!!! Other Hints and Notes
From Bill Stewart (Slashdot, May 7, 2006), edited

*Write like a newspaper reporter, not a grad student.
*Your objective is clear communication to the reader, not beauty or eruditeness or narration of your discoveries and reasoning process. Don't waste their time, or at least don't waste it up front.
*Hit the important conclusions in the first few sentences so your reader will read them. If you'd like to wrap up with them at the end of your memo, that's fine too, in case anybody's still reading by then, but conclusions come first.
*If you're trying to express something complex, simplify your writing so it doesn't get in the way. For something simple, 10th grade language structures will do, but if it's really hairy stuff, back down to 8th grade or so.
*Think about what your audience knows and doesn't know, and what they want and don't want. Express things in terms of what they know and want, not what you know.

From MarkusQ, Slashdot, May 7, 2006

*Top-down design: Starting with an outline and working out the details is the normal way of tackling an engineering problem.
*Checking your facts: Engineers should be used to checking anything that is even remotely doubtful before committing to it. So should writers.
*Failure mode analysis: For each sentence, ask yourself: could it be misread? How? What is the best way to fix it?
*Dependency analysis: Are the ideas presented in an order that assures that each point can be understood on the basis of the reader's assumed knowledge and the information provided by preceding points?
*Optimization: Are there any unnecessary parts? Does the structure require the reader to remember too many details at once, before linking them?
*Structured testing: If you read what you have written assuming only the knowledge that the reader can be expected to have, does each part work the way you intended? If you read it aloud, does it sound the way you intended?

!!! The Conference Review Process
It is hard to generalize the review process for conferences, but most reputable conferences operate according to these basic rules:

#The paper is submitted to the technical program chair(s). All current conferences require electronic submission, in PDF, occasionally in Word.
#The technical program chair assigns the paper to one or more technical program committee members, hopefully experts in their field. The identity of this TPC member is kept secret.
#The TPC member usually provides a review, but may also be asked to find between one and three reviewers who are not members of the TPC. They may be colleagues of the reviewer at the same institution, his or her graduate students or somebody listed in the references. The graduate student reviews can be quite helpful, since these reviewers often provide more detailed criticism rather than blanket dismissal. Any good conference will strive to provide at least three reviews, however, since conferences operate under tight deadlines and not all reviewers deliver as promised, it is not uncommon that you receive only two reviews.
#In some conferences, there is an on-line discussion of papers among the reviewers for a particular paper. Usually, a lead TPC member drives the discussion and then recommends the paper for acceptance, rejection or discussion at the TPC meeting.
#The technical program chair then collects the reviews and sorts the papers according to their average review scores.
#The TPC (or, rather, the subset that can make the meeting) then meets in person or by phone conference. Usually, the bottom third and the top third are rejected and accepted, respectively, without (much) further discussion. The papers discussed are those in the middle of the range, or where a TPC member feels strongly that the paper ended up in the wrong bin, or where the review scores differ significantly. Papers that only received two reviews are also often discussed, maybe with a quick review by one of the TPC members as additional background. The rigor of the TPC meeting depends on the size and reputation of the conference. In some workshops and conferences, the TPC chairs may well make the final decision themselves, without involving the whole TPC.
!!! Other References
!!! Talks
!!! Miscellaneous

!!! Contribution
This page contains material provided by Gail Kaiser, Craig Partridge, Sumit Roy, Eric Siegel, Sal Stolfo, Luca Trevisan, Yechiam Yemini, Erez Zadok and João Craveiro.
'''''[++Writing Technical Articles++]'''''\\
The notes below apply to technical papers in computer science and electrical engineering, with emphasis on papers in systems and networks.

Read Strunk and White, Elements of Style. Again.

Give the paper to somebody else to read. If you can, find two people: one person familiar with the technical matter, another only generally familiar with the area.

Papers can be divided roughly into two categories, namely original research papers and survey papers. There are papers that combine the two elements, but most publication venues either only accept one or the other type or require the author to identify whether the paper should be evaluated as a research contribution or a survey paper. (Most research papers contain a "related work" section that can be considered a survey, but it is usually brief compared to the rest of the paper and only addresses a much narrower slice of the field.)

'''+Research Papers+'''\\
A good research paper has a clear statement of the problem the paper is addressing, the proposed solution(s), and results achieved. It describes clearly what has been done before on the problem, and what is new.

The goal of a paper is to describe novel technical results. There are four types of technical results:

#An algorithm;
#A system construct: such as a hardware design, software system, protocol, etc.;
#A performance evaluation: obtained through analyses, simulation or measurements;
#A theory: consisting of a collection of theorems.

One goal of the paper is to ensure that the next person who designs a system like yours doesn't make the same mistakes and takes advantage of some of your best solutions. So make sure that the hard problems (and their solutions) are discussed and the non-obvious mistakes (and how to avoid them) are discussed. (Craig Partridge)

A paper should focus on

*describing the results in sufficient detail to establish their validity;
*identifying the novel aspects of the results, i.e., what new knowledge is reported and what makes it non-obvious;
*identifying the significance of the results: what improvements and impact they suggest.

'''+Paper Structure+'''

A typical outline of a paper is:
Page last modified on April 26, 2018, at 09:39 PM EST