OIDA International Journal of Sustainable Development
Open access peer-reviewed journal
Referee Guide: The Reviewing Criteria – Please indicate if you considered other criteria
Comments and critical suggestions on the content and structure of this referee format are most welcome. Please email Neville Hewage, firstname.lastname@example.org
This is an important role. You belong to a community of scholars, educators and practitioners who provide critical and constructive feedback on the work of their peers. Referees will be credited as Associate Editors for the volume of the OIDA International Journal of Sustainable Development to which they have contributed (although, of course, the particular papers they refereed will not be identified).
Your review and comments will be shared with authors in order to improve the quality of the publication.
The Role of the Referee
Please observe carefully the following guidelines on the role of the referee.
1. Expertise
Papers are not always sent to a referee whose field is identical to the subject matter of that paper. You do not have to be precisely qualified in a field to be a constructive referee; in fact, an excellent paper will speak beyond its narrowly defined field. If, however, a paper is so distant from your field that you do not feel qualified to judge its merits, please return it to the publishing manager for the journal, who will locate another referee.
2. Confidentiality
Referees receive unpublished work, which must be treated as confidential until published. They should destroy all electronic and printed copies of the draft paper and referee report once they have received confirmation that their reports have reached the publishing manager (confirmation is needed in case we cannot open the report files you send us). Referees must not disclose to others which papers they have refereed, nor may they share those papers with any other person.
3. Conflict of Interest
Referees must declare any conflict of interest or any other factor which may affect their independence—for instance, in cases where they have received a paper by a colleague or an intellectual opponent. In cases of conflict of interest, please notify the publishing manager of your inability to referee the particular paper.
4. Intellectual Merit
A paper must be judged on its intellectual merits alone. Personal criticism, or criticism based solely on the political or social views of the referee, is not acceptable.
5. Full Explanation
Critical or negative judgments must be fully supported by detailed reference to evidence from the paper under review or other relevant sources.
6. Plagiarism and Copyright
If a referee considers that a paper may contain plagiarism or that it might breach another party’s copyright, they should notify the publishing manager for the journal, providing the relevant citations to support their claim.
7. Timeliness
Referees are asked to return their reports within two weeks. This helps us provide rapid feedback to the author.
The refereeing process for publication in the Journal is a rigorous measure of the quality of content. Authors are expected to revise to the standards required of the more negative of the referee reports they receive. For instance, if one referee recommends ‘resubmit with major revisions’ and another ‘resubmit with minor revisions’, the author is expected to resubmit with major revisions.
Please evaluate using percentages (e.g., 20%, 80%, 90%, 92.8%), Yes/No, or Not Applicable (N/A). If necessary, you may add additional comments in each section and in the final section of the report.
Manuscripts must be clearly and concisely written in English. The Editors/Reviewers reserve the right to reject without review those that cannot adequately be assessed because of a poor standard of English. Authors whose first language is not English are encouraged to have their manuscript checked by a native English speaker.
Decision Making Process
There is often not a sharp separation between the role of editor and the role of the reviewer. Where reviewers disagree, for example, it may fall to the editor to make a judgement in the light of conflicting advice.
The meaning of peer review can be extended to include not only pre-publication decision-making but also post-publication assessment. For example, in the social sciences in recent years there has been a great deal of interest in the extent to which reported statistical findings can be replicated by other researchers. With the growing availability of electronic databases, original data can now be made available to other researchers to provide a check and assessment of published findings. Similarly, review articles surveying a particular sub-field may well make a contribution to the assessment of a particular piece of work.
Peer review is practised in many different ways by journals in humanities and social sciences. Differences cover such matters as pre-screening before sending to referees, participation by editors themselves in the process of assessment, who makes the final decision and whether authors remain anonymous.
The research process follows three stages (Maxwell and Delaney, 2004): conceptual, methodological, and statistical. Science conveys a conceptual world (linked to theories and research hypotheses) and an empirical world (linked to observations and data). The connection between these two worlds is achieved by the method (linked to the hypotheses and able to obtain data that can test them). Combining this idea with those developed in the American Psychological Association (APA) publication manual (2001), and with the empirical reviews of the reviewing process by Beyer et al. (1995) and Gilliland and Cortina (1997), it can be concluded that the theoretical components (introduction and discussion), the experimental design (the method section) and the results section are essential to the review process.
Validity theory, recently reviewed by Shadish, Cook, and Campbell (2001), indicates that research conclusions can be seriously questioned by a series of validity threats. The value of the first part of the paper, included in the conceptual stage, relies on construct validity—that is, on the reasons that may produce incorrect inferences about the construct explored in the study. Here we include problems with the definition of the construct, or with empirical definitions linked to the construct. The value of the design, included in the methodological stage, depends on two types of validity: internal (why might inferences about the effect of a given independent variable be incorrect?) and external (how far can the inferences be generalized across populations, contexts, etc.?). The control of relevant variables and the sampling of research units are the most important factors at this stage. The value of the two last sections, results analysis and discussion, should be based on statistical validity (why might inferences from the statistical analysis be incorrect?). Problems with the analysis assumptions (e.g., homogeneity of variances, sphericity, etc.) may increase the probability of Type I error, particularly as the number of contrasts increases. Designing experiments is a two-stage activity: structural (statistical design) and strategic (manipulation and control of variables). These stages are connected by the plan of the research, where the problem under investigation is stated explicitly (Ramos et al., 2004).
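The inflation of Type I error with the number of contrasts can be illustrated with a short numerical sketch. This assumes independent contrasts each tested at a per-contrast alpha of 0.05 (both are simplifying assumptions for illustration, not part of the guide itself); the familywise error rate is then 1 − (1 − α)^k.

```python
# Familywise Type I error rate for k independent contrasts,
# each tested at a per-contrast significance level alpha.
# P(at least one false positive) = 1 - (1 - alpha)^k

def familywise_error_rate(alpha: float, k: int) -> float:
    """Probability of at least one Type I error across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    rate = familywise_error_rate(0.05, k)
    print(f"{k:2d} contrasts at alpha=0.05 -> familywise error rate {rate:.3f}")
```

Even ten nominally valid contrasts push the chance of at least one spurious finding above 40%, which is why reviewers are asked to pay special attention to the number of statistical tests a paper reports.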
Some journals that have evaluated their editorial policy (Beyer et al., 1995) have concluded that relevance, originality and novelty, technical and conceptual quality, and suitability for the journal are the principal dimensions of a good reviewing process. In a similar approach, for Gilliland and Cortina (1997) the main dimensions were: design, adequacy of method, theoretical and statistical quality, background literature, construct development, and writing style. In agreement with these ideas, we have reordered the items in our clusters and added new items: writing style and suitability.
We also think that new experimental techniques (for example, Internet-based experiments) open a huge number of research opportunities, but also a great number of validity threats, so we have added some items in agreement with the proposals of Birnbaum (2000) on Internet-based experimentation and Shadish (2002) on field experiments. Finally, in a recent revision, items were reorganized following two criteria: (1) the suggestions of reviewers who had used these items, and (2) those of a typical educational guide oriented to doctoral and post-graduate students.
A final report is a collection of recommendations and criteria for guiding manuscript writing and evaluation. The guide is composed of eight clusters: a) General, b) Literature antecedents and research rationale, c) Theoretical development, d) Experimental Design, e) Results, f) Interpretation of the results, g) Manuscript writing, h) Documentation sources.
Most items are written as an affirmative question, in which a parenthetical statement clarifies the content being evaluated. Most items can be answered in a YES/NO fashion. Almost all items are useful for evaluating both experimental and non-experimental research, so we discriminate between them only where necessary. If a question is not applicable, state N/A.
Additional Important Notes:
Three types of criteria are included in each cluster.
(a) First, the items with one asterisk “ * ” (basic items) have to be fulfilled completely in order for the paper to be accepted.
(b) Second, the items with the “+” symbol have to be used when evaluating the technical quality of the manuscript; special attention must be paid to the design and statistical analysis.
(c) Third, the items without marks are complementary and can be used for evaluating the general quality of the manuscript.
Only papers with a high mark on all three types of criteria should be considered for publication.
I look forward to receiving your review.
Neville Hewage, Ph.D.
OIDA IJSD Journal