Date: Sat, 24 Nov 2001 16:22:00 -0800 (PST)
From: scheler@ICSI.Berkeley.EDU
To: connectionists@cs.cmu.edu
Subject: Parallel Paper Submission

We would like to suggest the adoption of a general policy in the Neural Network community of unlimited parallel paper submission. It would then be the task of editors and reviewers to accept or reject papers, and the liberty of authors to select the journal where they want to publish.

Gabriele Dorothea Scheler
Johann Martin Philipp Schumann

Date: Mon, 26 Nov 2001 08:17:10 -0800
From: "Thomas G. Dietterich" <tgd@cs.orst.edu>
To: scheler@ICSI.Berkeley.EDU
Cc: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

One of the most precious resources in the research community is our time. Unlimited parallel submission requires more time spent reviewing papers. This would probably reduce the quality of the reviewing and the willingness of referees to donate their time.

We need to strike a compromise between the one extreme of allowing authors to "shotgun" their papers to multiple conferences and the other extreme of allowing authors only a single chance to publish a paper. One possible compromise is the current arrangement, whereby authors are permitted to submit to multiple conferences/journals, but these submissions must be sequential. This has the added advantage that the paper can be improved with each submission in light of the reviews obtained from the previous one.

-- Thomas G. Dietterich
Department of Computer Science, Dearborn Hall 102
Oregon State University, Corvallis, OR 97331-3102
Voice: 541-737-5559  FAX: 541-737-3014
URL: http://www.cs.orst.edu/~tgd

Date: Mon, 26 Nov 2001 12:26:25 -0600 (CST)
From: Grace Wahba <wahba@stat.wisc.edu>
To: connectionists@cs.cmu.edu, scheler@ICSI.Berkeley.EDU
Subject: Re: Parallel Paper Submission

There is a downside to the idea below. If everyone submitted their paper to five journals in the hope of maximizing its acceptance by at least one, the amount of refereeing work to be done would be multiplied by up to a factor of five. Furthermore, if an author gets an early acceptance from their second-favorite journal and decides to wait to see whether their first-favorite journal will take it, then the second-favorite journal has its publication schedule disrupted. Editors and reviewers do not have unlimited free resources to deal with this. S & S -- I suggest you bounce this idea off a few editors, as well as people who are maxed out on refereeing, and see what kind of flak bounces back.

...................................
> From: scheler@ICSI.Berkeley.EDU
> To: connectionists@cs.cmu.edu
> Subject: Parallel Paper Submission
>
> We would like to suggest the adoption of a general policy in the
> Neural Network community of unlimited parallel paper submission.
> It is the task of editors and reviewers then to accept or reject
> papers, and the liberty of authors to select the journal where
> they want to publish.
>
> Gabriele Dorothea Scheler
> Johann Martin Philipp Schumann
...............
Date: Mon, 26 Nov 2001 17:04:07 -0500 (EST)
From: Jay McClelland
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

I wholeheartedly agree with Tom. Parallel submission would create a huge waste of reviewer time, and would lead to many bad feelings if a paper were accepted by two outlets. Obviously the problem with the sequential approach is that review turnaround can be slow. That is an issue that we all can and should work on.

-- Jay McClelland

Date: Tue, 27 Nov 2001 11:50:11 +0900
From: "Michael J. Lyons"
To: connectionists@cs.cmu.edu
Cc: scheler@ICSI.Berkeley.EDU
Subject: Re: Parallel Paper Submission: Separate Refereeing and Editorial Processes

On Mon, 26 Nov 2001, Grace Wahba wrote:

> There is a downside to the idea below: If everyone submitted their
> paper to five journals in the hopes of maximizing its acceptance by
> at least one, this will mean multiplying the amount of refereeing
> work to be done by up to a factor of 5. [...] Editors and reviewers
> do not have unlimited free resources to deal with this.

It is already not unusual to be asked to review, for a second journal, a paper that has been rejected by the first. Perhaps a way around this is a complete revision of the publishing model. For example, the reviewing and editorial processes could be separated, so that a paper is reviewed by two or three referees under a system independent of the editorial process. Referee selection could perhaps be partially automated using something like the CiteSeer system. Journal editors could then bid competitively on the pool of reviewed papers.

This is just a suggestion; there could be many other possible models of the peer-review publishing process. Is there anything sacred (or optimal) about the system currently in place?

Cheers,

- Michael Lyons

--
Michael J. Lyons, PhD
Senior Researcher, ATR Media Information Science
Kyoto, Japan
http://www.mic.atr.co.jp/~mlyons

Date: Tue, 27 Nov 2001 18:35:48 +0500
From: "M. Imad Khan"
To: Jay McClelland, connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

What about publishing everything on the web and leaving behind ALL kinds of paper publication? There could be a section where "unpublished" papers are posted, and readers could cast votes to "publish" them in the published section. This way, something that is useful but not yet "published" can be taken care of, while ordinary readers act as reviewers and change the status of a paper from "unpublished" to "published" through their votes.
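The voting scheme described above can be sketched in a few lines. (A toy illustration only: the promotion threshold, and the idea of giving some readers' votes more weight, are assumptions, not part of the proposal.)

```python
# Toy model of a web repository where reader votes promote a paper
# from "unpublished" to "published" status.

THRESHOLD = 5.0  # assumed promotion threshold (total weighted votes)

class Paper:
    def __init__(self, title):
        self.title = title
        self.status = "unpublished"
        self.score = 0.0

    def vote(self, weight=1.0):
        """Record one reader's vote; weight > 1 could favor experts."""
        self.score += weight
        if self.score >= THRESHOLD:
            self.status = "published"

paper = Paper("On Parallel Paper Submission")
for _ in range(4):
    paper.vote()            # four ordinary readers
paper.vote(weight=2.0)      # one more heavily weighted reader
print(paper.status)         # -> published
```

Whether a flat threshold or some weighted consensus is the right rule is exactly the open question in this thread.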
Imad

Date: Tue, 27 Nov 2001 15:10:17 +0100
From: Pau Bofill
To: connectionists@cs.cmu.edu
Subject: RE: Parallel Paper Submission

What about a pool of papers to which anyone can submit? Editors and reviewers would then select their preferred papers from the pool. (This is an embryonic approach, to be further improved. Please don't take it at full value, but let's ponder the idea.)

Pau Bofill

Date: Tue, 27 Nov 2001 14:35:06 -0000
From: Peter Andras <peter.andras@ncl.ac.uk>
To: tgd@cs.orst.edu, scheler@ICSI.Berkeley.EDU
Cc: connectionists@cs.cmu.edu
Subject: RE: Parallel Paper Submission

I think a better way of handling the publication problem would be a publication clearing centre with which many publications would be affiliated. Papers would be sent to the clearing centre, and authors would specify a preference list of the journals in which they wish to publish. The clearing centre would manage the refereeing process, and referees would suggest the place of publication, besides providing comments and recommendations. If the suggested place of publication matched the author's wishes, the paper would be published in that journal; otherwise the author could be consulted about whether he accepts the suggested place of publication.

Publications and referees would have ranks, depending on their measured scientific impact (impact factor for journals; recent cumulative impact factor and recent citation index for referees). Depending on the ranks of the target journals, referees with appropriate ranks would be assigned to evaluate the submissions. In this way the editorial boards of the journals would be primarily responsible for defining the scientific orientation and limits of the journal, and would give up their role in the refereeing process as an editorial body (the members would still provide individual refereeing service to the clearing centre).

With enough journals and referees associated with a clearing centre, this would provide a way to publish scientific results much faster, on average, than is currently possible. (Obviously, established, well-known researchers publish relatively fast even today, but those who are not yet well known, at the beginning of their scientific careers, may have to wait a long time before their papers appear in print.) Shorter average delays between first submission and publication would greatly benefit the advancement of science.

I see that creating such a clearing centre takes effort and organization, but I do not see it as impossible in the near future.
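The routing step of the clearing centre described above can be sketched as follows. (A toy illustration; the journal names and the rule of taking the highest-ranked match on the author's list are illustrative assumptions.)

```python
# Toy sketch of the clearing centre's routing step: referees endorse
# one or more venues, and the paper goes to the first journal on the
# author's ordered preference list that the referees endorsed.

def route(preferences, endorsed):
    """Return the first journal in the author's preference list that
    the referees endorsed, or None (the author is then consulted)."""
    for journal in preferences:
        if journal in endorsed:
            return journal
    return None

prefs = ["Neural Computation", "Neural Networks", "Neurocomputing"]
print(route(prefs, {"Neural Networks", "Neurocomputing"}))
# -> Neural Networks
print(route(prefs, {"Some Other Journal"}))
# -> None (consult the author)
```

The interesting design questions (how referee ranks gate which journals a review can support, and what happens on a refusal) are exactly what such a centre would have to settle.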
Best wishes,

Peter Andras

-----------------------
Dr Peter Andras
Lecturer, Neural Systems Group
Department of Psychology
University of Newcastle upon Tyne
Newcastle upon Tyne NE1 7RU, UK
tel. +44-191-2225790  fax. +44-191-2225622
http://www.staff.ncl.ac.uk/peter.andras/

Date: Tue, 27 Nov 2001 11:37:06 -0500
From: rinkus
To: connectionists@cs.cmu.edu
Subject: RE: Parallel Paper Submission: Separate Refereeing and Editorial Processes

If people are genuinely interested in improving the scientific review process, they might want to consider making it unacceptable for the graduate students of reviewers to do the actual reviewing. Graduate students are just that, students, and they lack the knowledge and wisdom to provide a fair review of novel ideas. In many instances a particular student may have knowledge and insight relevant to a particular submission, but the proper model is for the advertised reviewer (i.e., the one whose name appears on the editorial board of the publication) to consult with the student about the submission (probably in an indirect fashion, so as to protect the author's identity and ideas) and then write the review from scratch himself. The scientific review process is undoubtedly worse off to the extent that this kind of accountability is not ensured. We end up seeing far too much rehashing of old ideas and not enough new ones.
Rod Rinkus

Date: Tue, 27 Nov 2001 10:28:57 -0800
From: Nicholas Swindale
To: connectionists@cs.cmu.edu
Subject: Parallel Paper Submission

What about the following idea: allow unlimited parallel submission, but shift the burden of obtaining referee reports to the authors. The authors find two or three referees who are likely to be known to journal editors and likely to be impartial, submit the paper directly to those referees, and then, once the paper is adequately revised, submit it to the journal(s) of their choice and have the referees send in their reports at the same time. This could be done sequentially or in parallel. It would (a) save journal offices a lot of trouble, (b) cut down on multiple refereeing and time-consuming resubmissions, (c) get rid of anonymous reviewing, and (d) speed up publication.

Nick Swindale
__________________
Associate Professor, Dept of Ophthalmology
University of British Columbia
2550 Willow St., Vancouver BC, Canada V5Z 3N9
tel. 604 875 5379  fax. 604 875 4663
e-mail: swindale@interchange.ubc.ca

Date: Tue, 27 Nov 2001 11:54:26 -0800
From: John Lazzaro
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

> Michael Lyons writes:
>
> Perhaps a way around this is a complete revision of the publishing model.

We might also look to Hollywood: the New York Times movie reviewer does not exercise prior restraint to prevent "bad" movies from being made or released. Instead, financial issues -- funding the movie, and distributor buy-in -- dictate whether a movie gets made and released.
The role of the newspaper reviewer is to judge movies after release, and the public is trained to read the opinions of respected reviewers as part of deciding which movie to see on a Friday night.

Ideas of this sort are under active investigation in the Digital Library community; this abstract gives the flavor of a project here at Berkeley:

http://buffy.eecs.berkeley.edu/Seminars/2001/Nov/011105.wilensky.html

--jl
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu  www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------

Date: Tue, 27 Nov 2001 20:16:12 -0500
From: Douglas Rohde
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

I considered the idea of a submission pool from which editors would select papers they wished to publish, but decided against suggesting it myself because I think it would seriously compromise the effectiveness of the review process. I think most of us would agree that the review process, however painful for both author and reviewer, is very useful for identifying flaws in work that should not be published and, perhaps more importantly, for improving good papers. If you submitted a paper to the pool and journal A wanted to accept it with major (and presumably justified) revisions while journal B wanted to accept it with minor ones, would you spend the time to do the revisions, or would you just go ahead and publish it in journal B as is?

In order to attract potentially good papers, journal editors would have an incentive to suggest the fewest changes. As a result, the quality of the published papers would diminish. And if you were asked to write a review for a paper from the pool, how much effort would you put in, knowing the author might not even pay attention to your review and could simply ignore it and publish elsewhere?

As for Nicholas Swindale's suggestion, I can't imagine that the "get your own reviewers" model could possibly work. Would we be paying the reviewers as well? How many reviews could we solicit before choosing the ones we want to send in?

Finally, I think the quality of the material coming out of Hollywood speaks to the viability of the movie-reviewer model. I do think, though, that it would be interesting to have a journal devoted purely to review of and commentary on already-published work in a particular field.

Doug Rohde
Carnegie Mellon University

Date: Tue, 27 Nov 2001 22:36:02 -0600
From: rsun@cecs.missouri.edu
To: connectionists@cs.cmu.edu, jlm@cnbc.cmu.edu, macgyver_99@hotmail.com
Subject: Re: Parallel Paper Submission

> From: "M. Imad Khan"
>
> What about publishing everything on the web and leave ALL kinds of
> paper publications. There could be a section where "unpublished"
> papers are published and the readers can cast votes as to "publish"
> them in the published section.

The opinions of experts should weigh a lot more than those of others; that is the only way to maintain quality, and quality is absolutely critical to scientific work. How do we emphasize experts' opinions if we do away with peer review?
Even with peer review, we constantly have difficulty determining who the experts are whenever we go outside (relatively) clearly delineated narrow areas.

--Ron

===========================================================================
Prof. Ron Sun
CECS Department, University of Missouri-Columbia
201 Engineering Building West, Columbia, MO 65211-2060
phone: (573) 884-7662  fax: (573) 882-8318
email: rsun@cecs.missouri.edu
http://www.cecs.missouri.edu/~rsun
http://www.cecs.missouri.edu/~rsun/journal.html
http://www.elsevier.com/locate/cogsys
===========================================================================

Date: Wed, 28 Nov 2001 14:42:15 +0900
From: Adriaan Tijsseling
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

> I wholeheartedly agree with Tom. Parallel submission would create
> a huge waste of reviewer time, and would lead to many bad feelings
> if a paper is accepted to two outlets. Obviously the problem with
> the sequential approach is that review turnaround can be slow. This
> is an issue that we all can and should work on.

An additional problem is that the reviewing process itself is not particularly efficient. How often does it happen that reviewers' reports disagree? Or that one reviewer suggests a modification that another reviewer asks to have removed?

An ideal, and certainly attainable, option would be a single online repository for papers, in the same vein as CiteSeer or CogPrints. Researchers could retrieve the papers they are interested in, read them, and return a score based on relevance, originality, and the like. Perhaps they could also submit a more detailed commentary anonymously, visible to the author(s) only. In this age, one should try to benefit from modern internet technologies. Let the academic public decide which articles they deem relevant and useful. That way, articles would be distributed much faster (the current one-, two-, or three-year delay between writing and publishing is becoming ridiculous).

Adriaan Tijsseling

Date: Wed, 28 Nov 2001 12:17:18 +0000
From: Sunny Bains <sunny@sunnybains.com>
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

This may be an entirely off-the-wall comment, but there is a possible solution that might be imported from (believe it or not) the vagaries of English property law. It used to be that each potential buyer (the journals) of a house (the paper) would have their own separate survey (review) done for the mortgage companies. More recently, someone came up with the idea (still experimental) of having the seller have the house surveyed themselves (the surveying profession is independent) and then give that report to any potential buyers.

So, you could have a panel of reviewers who review for many journals in the same field. If the author supplies an ordered list of all the journals he or she is interested in, the reviewer can make the appropriate comments.

(I have very narrow experience of paper publishing, so apologies if this is a naive solution; I just thought it might be worth suggesting.)

Best,

Sunny Bains
Imperial College of Science and Technology

Jay McClelland wrote:

> I wholeheartedly agree with Tom.
> Parallel submission would create a huge waste of reviewer time, and
> would lead to many bad feelings if a paper is accepted to two outlets.
> Obviously the problem with the sequential approach is that review
> turnaround can be slow. This is an issue that we all can and should
> work on.
>
> -- Jay McClelland

Date: Wed, 28 Nov 2001 08:25:30 -0800
From: John Koza
To: connectionists@cs.cmu.edu
Subject: Parallel Paper Submission

Hello Connectionists:

There is a very simple way to solve the real problems that the proponents of parallel paper submission are trying to address. It does not involve any complicated new machinery (e.g., clearinghouses), and it does not involve (after initial implementation) any more work for already-busy editors and reviewers.

A couple of years ago, there was one journal in the field of genetic and evolutionary computation. Like most journals in computer science, it had a lengthy review process. Papers languished for months on the editor's desk before being sent out to reviewers. Reviewers were typically allowed six to nine months or more to write their reviews. The editors then took many additional months to reach a decision. Submitting authors were frustrated at having their work "tied up" waiting for a (possibly adverse) decision on whether their paper would be published. This is, of course, particularly significant for new academics who need publications in order to earn tenure. It was almost true that an author had forgotten writing the paper by the time it was published.
When the IEEE started considering the creation of a new journal in the field of genetic and evolutionary computation, I talked to the editor-designate and convinced him to follow the practice of the biological and medical sciences. The nearly universal practice there is exclusive submission combined with quick review. For example, Science demands that an author submit a paper exclusively to them; in exchange, Science promises the author a cursory yes/no decision (based on topic suitability and general appearance) within a couple of weeks and a final decision (after detailed peer review) in about six weeks.

I personally subscribe to over a dozen journals in the biological and medical sciences. They all follow this approach (as do other journals that I read in the library). I would say the overall average time I see is about 70 to 90 days. In fact, many of them loudly advertise their average review time, both in ads for the journal and in the author submission instructions. I have even seen competitive ads from journals pointing out their average time versus the average time for "brand X."

There is nothing inconsistent about a rapid overall schedule and quality reviewing and editing. The fact is that it does not take reviewers or editors any more time or effort to maintain a "biological sciences" kind of schedule than one in which reviewers are given six to nine months and editors habitually take another six to nine months to reach a decision. It is simply a matter of when people spend their time. Having reviewed hundreds of papers, I can say that I read the paper and write the review at only one of two moments, independent of whether I have been given 30, 60, 90, or 180 days to do the review. If the paper really grabs my attention, I do it within a day or so after I receive it and send my review in. If not, I do it a day or so before the review is due.
Later, when the Genetic Programming and Evolvable Machines journal was being created, I convinced the publisher and editor-designate to follow the same kind of schedule. That was much easier, because the IEEE journal had already established and maintained its quick schedule. Over time, the original journal in the field of genetic and evolutionary computation has, under competitive pressure, moved toward the same kind of schedule. (There is, of course, a one-time transitional effort required to get onto a faster schedule.)

The fact is that, in a rapidly changing field, unnecessarily long publication schedules are a significant disservice. They impede advances in the field, because new ideas get out more slowly, and they hamper the careers of individual authors. The simple way to move from the current "lose-lose" situation to a "win-win" situation is a rapid schedule. Once one journal in a field adopts this kind of common-sense schedule, Darwinian natural selection takes over and produces significant competitive pressure on the others.

John R.
Koza
Consulting Professor, Biomedical Informatics, Department of Medicine
Medical School Office Building (MC 5479)
Stanford University, Stanford, California 94305-5479
Consulting Professor, Department of Electrical Engineering, School of Engineering
Phone: 650-941-0336  Fax: 650-941-9430
E-mail: koza@stanford.edu
WWW home page: http://www.smi.stanford.edu/people/koza
Genetic programming in general: http://www.genetic-programming.org
Genetic Programming Inc.: http://www.genetic-programming.com
GECCO-2002 (GP-2002), New York City, July 9-13, 2002, and the International Society on Genetic and Evolutionary Computation: http://www.isgec.org/
Euro-GP-2002, April 3-5, 2002, Ireland: http://evonet.dcs.napier.ac.uk/eurogp2002

Date: Wed, 28 Nov 2001 11:56:57 -0600
From: "John F. Kolen"
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

Parallel submission offers the author the illusion of saving time and energy by increasing the probability of acceptance with minimal revisions. In my experience as an AE, very few papers are accepted as-is on the first attempt. Flat-out rejects are just as rare; revise-and-resubmit is the most frequent outcome. Upon resubmission, papers are accepted either as-is or with minor corrections.

Now consider what happens with parallel submission. Essentially, more people look at your paper, and most quality reviews will identify the same deficiencies in it.
So rather than two or three people pointing out your mistakes, you have five or six, and a couple of journals telling you to revise and resubmit. And it is still a two-step process.

Most, if not all, of the suggestions in this thread have made the same assumption: that all that matters for acceptance is the quality of the paper. This is not the case, especially in a field as diverse as ours. For any two journals, the review criteria are different. Journals have target audiences with certain expectations. Some audiences expect excruciatingly thorough background sections, while others are happy with the formulas and a minimal amount of window dressing. Multiple submission should thus entail additional work on each submission, to tailor it to the target audience. This assumes, of course, that the author cares about such matters and does not perceive the publication as merely "a line on the vita."

Finally, if you think the reviews are unfair, discuss it with the editor. If you think the turnaround time is too long, get involved in the review process and write quality reviews with a quick turnaround. There always seem to be more papers than reviewers.

--
John F. Kolen                 voice: 850.202.4420
Research Scientist            fax: 850.202.4440
Institute for Human and Machine Cognition
University of West Florida

Date: Wed, 28 Nov 2001 09:48:08 -0800 (PST)
From: Anand Venkataraman
To: connectionists@cs.cmu.edu
Subject: Re: Parallel Paper Submission

> What about publishing everything on the web and leave ALL kinds of
> paper publications. There could be a section where "unpublished"
> papers are published and the readers can cast votes as to "publish"
> them in the published section.

The problem with this is that it is not at all clear who gets to vote. Proper, conscientious reviewing of papers by well-qualified individuals is a service not only to the community but also to the authors. It ensures that their papers do not see ink prematurely and become a source of embarrassment later in their careers.

A forum similar to the one you suggest is already available through the repository at www.arXiv.org, where people are invited to (and frequently do) post papers and reports that they may not intend to publish. Many publications also cite these arXived papers as though they were actually in print.

&

Date: Wed, 28 Nov 2001 12:51:45 -0500
From: Geoffrey Hinton
To: connectionists@cs.cmu.edu
Subject: how to decide what to read

In the old days there had to be a way to decide what to print, because printing and circulation were bottlenecks. Now that we have the web, the problem is clearly how to decide what to read. It is very valuable to have the opinions of people you respect to help you make this choice.

For the last five years, I have been expecting someone to produce software that facilitates the following: on people's homepages, there is some convention for indicating a set of recommended papers, specified by their URLs. When I read a paper I think is really neat, I add its URL to my recommended list using the fancy software (which also lets me make comments and ratings available). I also keep a file of the homepages of people I trust (and how much I trust them), and the fancy software alerts me to new papers that several of them like. It is hard to manipulate the system, because people own their own homepages, and if they bow to pressure and recommend second-rate work written by their adviser, other people will stop relying on them. Obviously, there are many ways to elaborate and improve this basic idea, and many potential problems that need to be ironed out.
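The aggregation step of the software described above can be sketched in a few lines. (A toy illustration; the homepage format, the trust weights, and the threshold for "several trusted people" are all assumptions.)

```python
# Toy sketch of the trusted-homepage recommendation aggregator: each
# trusted person's homepage exposes a list of recommended paper URLs,
# and a paper is surfaced when its trust-weighted recommendation count
# crosses a threshold.

from collections import defaultdict

# My trust file: homepage owner -> trust weight (assumed format).
trust = {"alice": 1.0, "bob": 0.5, "carol": 1.0}

# Recommendation lists read from each homepage (assumed format).
recommendations = {
    "alice": ["http://example.org/paper1", "http://example.org/paper2"],
    "bob":   ["http://example.org/paper2"],
    "carol": ["http://example.org/paper2", "http://example.org/paper3"],
}

def papers_to_read(trust, recommendations, threshold=1.5):
    """Return URLs whose total trust-weighted recommendation score
    meets the threshold, strongest first."""
    score = defaultdict(float)
    for person, urls in recommendations.items():
        for url in urls:
            score[url] += trust.get(person, 0.0)
    hits = [u for u in score if score[u] >= threshold]
    return sorted(hits, key=lambda u: -score[u])

print(papers_to_read(trust, recommendations))
# paper2 is recommended by all three (score 2.5); paper1 and paper3
# score 1.0 each, below the threshold.
```

The manipulation-resistance Hinton points to lives outside the code: trust weights are mine to set, and a homepage owner who recommends second-rate work simply loses weight in my file.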
But I think it would be extremely useful to have. Somebody please write this software. Geoff Hinton Message-ID: <3C04B4CD.6090409@idsia.ch> Date: Wed, 28 Nov 2001 10:56:29 +0100 From: Ivo Kwee User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.5) Gecko/20011023 X-Accept-Language: en MIME-Version: 1.0 To: Leslie Pack Kaelbling , connectionists@cs.cmu.edu Subject: referee idea References: <3BFCC2C7.6040406@idsia.ch> <3681911.1006414391@[130.62.67.97]> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Hi, I saw the postings on connectionists about referees. As a matter of fact, I bounced a related idea of open review for JMLR to Leslie just a week ago. I think it is relevant and might in fact be feasible [hope L. doesn't mind me quoting her...]. About L's comment on "sifting": additionally we can keep a personal list of the latest reviewed papers of some (important) person, personal "recommendations" (see Amazon), reviewer rankings, global download statistics, online versioning, etc. The good thing about "the Amazon way" is that you can also read the reviewers' comments themselves, which most journals do not make available but which are quite useful (as an exception, I think the J. of Am. Statistics does include reviewers' comments). Ivo Kwee IDSIA Ivo Kwee wrote: > Leslie, > > Why not make JMLR even more radical? Do away with an Editorial Board > altogether? Do a referee system like Amazon.com, publish papers > immediately and let *everyone* act as referee immediately, but also > give points to referees themselves (start the current Board members as > "veterans"). This also gives fair credit to students/researchers who > referee papers on behalf of someone. > > What do you think? > > Ivo > Leslie Pack Kaelbling wrote: > It's an interesting idea. Might be fun to try in parallel with the > usual system (you can start a new journal that works this way!). > > I guess I'm ultimately an elitist.
I think that a minority of the > community have better insight, understanding, and taste than the rest, > and that they should decide what gets published. > > Published is probably the wrong word here. In some sense, because of > things like eprint archives, everybody can publish their own work, > which is great and important. So I see the role of journals as really > giving an imprimatur. Some group of people thinks these (few) papers > are good. > > As more and more information becomes available, we'll even pay for > people to sift it for us, and find the good parts. Of course, "good" > to me is "bad" to someone else, and so that other community should > find some other editor to sift out the stuff they like. > > The reason I subscribe to some magazines is that my taste is aligned > with that of the editors. Even if a huge superset of that material > were available online, I'd pay for the (paper or electronic) magazine, > because I don't have time to do the sifting for myself. > > - L > Mime-Version: 1.0 X-Sender: jbower@131.215.25.158 Message-Id: Date: Wed, 28 Nov 2001 19:11:26 -0800 To: Connectionists@cs.cmu.edu From: "James M. Bower" Subject: Fwd: Re: Parallel Paper Submission Content-Type: text/plain; charset="us-ascii" ; format="flowed" > > >The opinions of experts should weigh a lot more than those of others --- >that's the only way to maintain quality and quality is absolutely critical >to scientific work. The deep and serious problem, in my view, is that the review of papers has become more and more political and more and more conservative, as viewed over my now 20 years publishing in the scientific literature. All one has to do is look at the real process of publishing in journals like Science and Nature to see that. 
Even established investigators regularly complain about a review process that seems to be driven more by who you know than what you know and that regularly rejects papers based on what is in the discussion section, rather than what is in the methods and results. Personally, I increasingly find that graduate students and postdocs are better judges of what is scientifically interesting than are many established faculty tied up in these political loops. I realize there is an inherent contradiction here, because at least in biology, many of the senior faculty I know (including one I am very familiar with) rely heavily on their graduate students and post docs to ghost reviews. So perhaps the problem has to do with the fact that the ghost reviewing students are trying to write the kind of reviews they would imagine their mentors would write. :-) Whatever, in my view there is no question that something is broken, and something needs to be done to fix it. The robustness of the discussions on this mail group speaks to that. I also think that, in the long run, the review that matters is the one that will be provided by the graduate student 100 years from now who no longer understands the politics and will therefore be considering published work on its merits and with the benefit of some hindsight. I would prefer to be able to speak to that student without the political filter currently being applied to way too many of our papers. I am happy to take the risk that I will be judged a fool. Rather that than not be allowed to take risks at all. Jim Bower Mime-Version: 1.0 X-Sender: me@umb.u-strasbg.fr Message-Id: In-Reply-To: References: X-Spam-Reporting: All UCE reported. 
X-Accept-Language: en, fr, hu X-PGP-DH/DSS: X-PGP-RSA: Date: Thu, 29 Nov 2001 05:49:35 +0100 To: connectionists@cs.cmu.edu, Adriaan Tijsseling From: Michel Eytan Subject: Re: Parallel Paper Submission Content-Type: text/plain; charset="us-ascii" Thus hath held forth Adriaan Tijsseling at 28-11-2001 re Re: Parallel Paper Submission: [snip] > An ideal, but certainly attainable option is to have one single online > repository for papers, in the same vein as citeseer or cogprints. > Researchers can then retrieve the papers they are interested in, read them, > and return a score based on relevance, originality, and the like. Perhaps > they can submit a more detailed commentary anonymously, visible to the > author(s) only. We all know that there is a very serious problem with this: MONEY! The editors will *never* agree to put on the Web for free items that they sell -- and not cheap at that :-((( > In this age, one should optimally try to benefit from modern internet > technologies. Let the academic public decide which articles they deem HOW? The refereeing process is intended to do precisely that, with the assistance of experts in the field... > relevant and useful. This way articles are much faster distributed (the > current 1, 2, 3 year delay between writing and publishing is really becoming > ridiculous nowadays). > > > Adriaan Tijsseling -- Michel Eytan eytan@dpt-info.u-strasbg.fr I say what I mean and mean what I say Date: Thu, 29 Nov 2001 01:29:27 -0600 From: "John F. Kolen" To: connectionists@cs.cmu.edu Subject: Re: Parallel Paper Submission Message-ID: <20011129012927.M1472@altair.coginst.uwf.edu> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: ; from adriaan@tijsseling.com on Wed, Nov 28, 2001 at 02:42:15PM +0900 On Wed, Nov 28, 2001 at 02:42:15PM +0900, Adriaan Tijsseling wrote: > An additional problem is that the reviewing process itself is not > particularly efficient.
How often does it happen that reviewers' > reports do not agree? Or that one reviewer suggests a modification, which > another reviewer actually requests to be removed? > I think you're talking about consistency here, not efficiency. The editor, not the reviewers, is the final judge in such situations. > An ideal, but certainly attainable option is to have one single online > repository for papers, in the same vein as citeseer or cogprints. > Researchers can then retrieve the papers they are interested in, read them, > and return a score based on relevance, originality, and the like. Perhaps > they can submit a more detailed commentary anonymously, visible to the > author(s) only. > > In this age, one should optimally try to benefit from modern internet > technologies. Let the academic public decide which articles they deem > relevant and useful. This way articles are much faster distributed (the > current 1, 2, 3 year delay between writing and publishing is really becoming > ridiculous nowadays). The big time sinks are collecting reviews and then getting the paper to press. The former is a necessary evil, the magic stamp of approval. The latter, however, could be dispensed with by personally providing access to electronic versions that index engines, such as CiteSeer, can latch onto. And don't we already decide which articles are relevant and useful? Does it really matter if the information comes from a peer-reviewed journal or a self-published technical report? Neuroprose was full of such documents. The real issue, IMHO, has nothing to do with disseminating information. Want everyone to have your paper? Just post it, don't bother submitting it to a journal. The true problem is the apparent value of a published article to those committees and administrators outside our field. -- John F.
Kolen voice: 850.202.4420 Research Scientist fax: 850.202.4440 Institute for Human and Machine Cognition University of West Florida Date: Thu, 29 Nov 2001 13:48:39 -0500 (EST) From: Lee Giles To: Connectionists cc: giles@ist.psu.edu Subject: Re: Parallel Paper Submission In-Reply-To: <019801c17829$4b96f0a0$050010ac@mendel> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Readers of this thread might find the paper "Online or Invisible?", recently published in Nature, of interest. http://www.neci.nec.com/~lawrence/papers/online-nature01/ Steve Lawrence presents evidence that online papers are 5 times more likely to be cited than those not online. Lee Giles -- C. Lee Giles, David Reese Professor, School of Information Sciences and Technology Professor, Computer Science and Engineering The Pennsylvania State University 001 Thomas Bldg, University Park, PA, 16802, USA giles@ist.psu.edu - 814 865 7884 http://ist.psu.edu/giles From: "Prof. Michael Stiber" MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <15366.63764.111258.27729@kasei.bothell.washington.edu> Date: Thu, 29 Nov 2001 19:12:20 -0800 To: connectionists@cs.cmu.edu Subject: Re: reviewing In-Reply-To: <01Nov28.125155edt.453148-19376@jane.cs.toronto.edu> References: <01Nov28.125155edt.453148-19376@jane.cs.toronto.edu> X-Mailer: VM 6.92 under 21.1 (patch 3) "Acadia" XEmacs Lucid Reply-To: stiber@u.washington.edu Geoffrey Hinton writes: > > In the old days, there had to be a way to decide what to print because > printing and circulation were bottlenecks. But now that we have the > web, the problem is clearly how to decide what to read. It is very This is the basis of one of the recurring themes of arguments over "modernizing" the publication review process. Suggestions usually revolve around letting something like market forces decide which papers are really most significant.
This is based on the assumption that the only motivation for the current review system is filtering, and that we can all save ourselves some time by using technology to speed the filtering process (perhaps by distributing it among a large number of independently-operating processors, each doing a much smaller amount of work than current reviewers do :^)). For many (most?) people publishing, however, peer review serves another purpose: it provides an ongoing, relatively objective performance review. This is done incrementally, so the review is already distributed along time and among a number of people. When someone is asked to review a package of information about a colleague for promotion or tenure, for example, it is perfectly reasonable to rely on past reviews --- publication record --- to give a broad overview of _one aspect_ of performance. Lacking that, I suppose a conscientious reviewer would be faced with the enormous task of carefully reading a good fraction of the candidate's work. The same would be true, if less extensive, for annual performance reviews. In effect, all that time "saved" in streamlining the paper review process would just pop up elsewhere. Mike Stiber -- Prof. Michael Stiber stiber@u.washington.edu Computing and Software Systems http://faculty.washington.edu/stiber University of Washington, Bothell tel: +1-425-352-5280 Box 358534, 18115 Campus Way NE fax: +1-425-352-5216 Bothell, WA 98011-8246 USA Message-ID: <3C064F41.4A22FB2E@yha.att.ne.jp> Date: Fri, 30 Nov 2001 00:07:45 +0900 From: Sam Joseph X-Mailer: Mozilla 4.73 [ja]C-CCK-MCD BDPjm-Sony3 (Windows NT 5.0; U) X-Accept-Language: ja MIME-Version: 1.0 To: Geoffrey Hinton , connectionists Subject: Re: how to decide what to read References: <01Nov28.125155edt.453148-19376@jane.cs.toronto.edu> Content-Type: text/plain; charset=iso-2022-jp Content-Transfer-Encoding: 7bit This is exactly the sort of thing that the NeuroGrid project is trying to achieve. 
http://www.neurogrid.net/ I won't claim that we have the perfect solution just yet, but we're working towards a system where you can easily publish meta-data about whatever documents you care to. Over time the system learns who you go to for which information, automatically adjusting future recommendations on this basis. E.g., I often read and download pages related to "Fly Fishing" that were marked up by John, so my NeuroGrid node learns to go to John first when I search for things related to "Fly Fishing". NeuroGrid is completely open source, the code is all in SourceForge, which means that it (1) is still under development and (2) will take some time to become a turnkey solution. I believe that integrating trust and learning into this sort of distributed system is essential and that ultimately we will all benefit from the approach, whether NeuroGrid is the actual system we end up using or not. CHEERS> SAM Geoffrey Hinton wrote: > In the old days, there had to be a way to decide what to print because > printing and circulation were bottlenecks. But now that we have the > web, the problem is clearly how to decide what to read. It is very > valuable to have the opinions of people you respect in helping you > make this choice. For the last five years, I have been expecting > someone to produce software that facilitates the following: > > On people's homepages, there is some convention for indicating a set > of recommended papers which are specified by their URL's. When I read > a paper I think is really neat, I add its URL to my recommended list > using the fancy software (which also allows me to make comments and > ratings available). > > I also keep a file of the homepages of people who I trust (and how > much I trust them) and the fancy software alerts me to new papers that > several of them like.
> > It's hard to manipulate the system because people own their own > homepages and if they bow to pressure to recommend second-rate stuff > written by their adviser, other people will stop relying on them. > > Obviously, there are many ways to elaborate and improve this basic > idea and many potential problems that need to be ironed out. > But I think it would be extremely useful to have. > Somebody please write this software. > > Geoff Hinton To: connectionists@cs.cmu.edu Subject: Re: Parallel Paper Submission In-Reply-To: Your message of "Tue, 27 Nov 2001 22:36:02 CST." <200111280436.fAS4a2a24303@ari1.cecs.missouri.edu> Date: Thu, 29 Nov 2001 14:23:02 +0000 From: James Hammerton Message-Id: <20011129142303.555687B5D@omega.tardis.ed.ac.uk> Hi, I've followed this discussion with some interest. It seems to me the main problem people are trying to address is the amount of time taken in the reviewing process. In my opinion, the way things work now is fine, the long times taken by the review process aside. Speeding up the review process isn't so difficult though if things are handled electronically. I'm one of the editors for a special issue of the JMLR, and we managed to get the reviewing process done and notifications sent out inside 3 months. The deadline for papers was 2nd September; we gave a 5th November deadline for reviews to be sent back to us and we planned notifications for the 16th November, but we slipped back a week on that timetable due to some late reviews. The JMLR normally gives reviewers 6 weeks to return their reviews and gets a relatively quick turnaround as a result. I don't see why more journals can't operate like this -- even if the final publication is in print rather than on the web. I don't think the idea put forward for authors choosing their own reviewers is a good one -- there's a conflict of interests there.
It seems to me that the ideas for pools of reviewers to whom papers get submitted is interesting but may be too complicated in practice. I'm not sure there is any pressing need to change from the current model, as opposed to finding ways to speed it up (e.g. by handling things electronically and using tight review schedules). James Hammerton From: Network Editor MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <15366.21071.74022.793376@gargle.gargle.HOWL> Date: Thu, 29 Nov 2001 15:20:47 +0000 To: comp-neuro@neuroinf.org, connectionists@cs.cmu.edu, SMBnet@smb.org, comp-bio@net.bio.net, neur-sci@net.bio.net Subject: NETWORK: Computation in Neural Systems X-Mailer: VM 6.92 under 21.1 (patch 12) "Channel Islands" XEmacs Lucid Please see below the contents for the current issue of NETWORK: Computation in Neural Systems. NETWORK is always striving to reduce the time taken in processing submitted papers. Here are some figures for the past year: - For accepted papers, our median receipt-to-final decision time was 157 days. - For rejected papers, our median receipt-to-final decision time was 137 days. - Note that for accepted papers, the median receipt-to-FIRST decision time was 94 days. The difference of 63 days was in most part due to authors carrying out revisions required prior to publication. NETWORK is available electronically (http://www.iop.org/journals/ne). For non-subscribers the current issue is freely accessible (http://www.iop.org/free2001). User-Agent: Microsoft-Entourage/9.0.1.3108 Date: Fri, 30 Nov 2001 20:38:47 +0900 Subject: Re: Parallel Paper Submission From: Adriaan Tijsseling To: Connectionists Message-ID: In-Reply-To: Mime-version: 1.0 Content-type: text/plain; charset="US-ASCII" Content-transfer-encoding: 7bit > http://www.neci.nec.com/~lawrence/papers/online-nature01/ > > Steve Lawrence presents evidence that online papers are 5 times > more likely to be cited than those not online. 
Amen to that! Which was basically my point as well when I suggested a central repository for papers. Some assumed I was talking about journals making paper submission an online thing. But in fact, why are we still going for paper? Research would speed up much more if work could be published online in a searchable and widely accessible database. Would that cost money? Hardly any. All that is needed is a fast server and a reliable host. Researchers who have read a paper in that database can submit a review or score, provided they are subscribed (for free of course) to that kind of service. Over time, any paper would accumulate reviews and scores, much to the benefit of the author. After all, reviewing would not be restricted to one, two, or three reviewers, but open to anyone interested in the paper. Of course, as someone pointed out to me, this may mean some papers might never be reviewed (but then, one could actively request a review). These are just loose ideas, but I really feel the time is ripe to do away with tedious reviewing, editing, and publishing and instead use what is already there: the internet. Adriaan Tijsseling From: "Prof. Michael Stiber" MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <15368.1465.955083.919049@kasei.bothell.washington.edu> Date: Fri, 30 Nov 2001 14:18:33 -0800 To: Connectionists Subject: Re: Parallel Paper Submission In-Reply-To: References: X-Mailer: VM 6.92 under 21.1 (patch 3) "Acadia" XEmacs Lucid Reply-To: stiber@u.washington.edu Adriaan Tijsseling writes: > > Researchers who have read a paper in that database can submit a review or > score, provided they are subscribed (for free of course) to that kind of > service. Over time, any paper would be accumulating reviews and scores, much > to the benefit of the author. After all, it would not be restricted to one > or two or three reviewers, but to anyone interested in the paper.
Of course, > as someone pointed out to me, this may mean some papers might never be > reviewed (But then, one could actively request a review). > > These are just loose ideas, but I really feel the time is there to make away > with tedious reviewing, editing, and publishing and instead use what is > already there: internet. Actually, the scoring system you mention is also already there: ISI Web of Science (the electronic version of the Science Citation Index). In this case, scores are the number of times that a paper has been cited, with links to the citing works (which in effect review the paper by using its results). Unfortunately, it isn't in the public domain. Mike Stiber -- Prof. Michael Stiber stiber@u.washington.edu Computing and Software Systems http://faculty.washington.edu/stiber University of Washington, Bothell tel: +1-425-352-5280 Box 358534, 18115 Campus Way NE fax: +1-425-352-5216 Bothell, WA 98011-8246 USA From: Nando de Freitas To: connectionists@cs.cmu.edu Cc: nando@ubc.cs.ca Message-ID: Date: Sat, 01 Dec 2001 10:28:41 -0800 X-Mailer: Netscape Webmail MIME-Version: 1.0 Content-Language: en Subject: Parallel Paper Submission X-Accept-Language: en Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: 7bit Dear connectionists Some concerns: 1) Is our goal to publish as much as we can? Or is it to advance science and technology? Maybe less time writing and reviewing would leave us with more time for contributing to higher goals. 2) As someone who just entered the tenure track process, I honestly don't feel any pressure to write lots of papers - I do feel pressure to carry out good research. Is this a Berkeley/UBC phenomenon only? I suspect not. Where does this not hold? Maybe this is what needs to be fixed. 3) Whatever we do, let us remember that in most fields of science we encounter papers that weren't recognised for the first, say, 50 years of their existence and subsequently had a significant impact on science.
How do we deal with this? 4) I love journals like "Journal of the Royal Statistical Society - B" because many of the papers include reviews at the end. It turns out that some of the reviews are very critical and really good. I often find myself reading the reviews before reading the paper! Of course, since the reviews get published and CITED, people make an effort to be constructive, soundly critical and not make fools of themselves. This is a great model - slow but good. Cheers! Nando Message-Id: <1.5.4.32.20011202161655.01ba4e1c@unipg.it> X-Sender: sfr@unipg.it X-Mailer: Windows Eudora Light Version 1.5.4 (32) Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Date: Sun, 02 Dec 2001 17:16:55 +0100 To: Connectionists From: "Simone G.O. Fiori (Pg)" Subject: Re: Parallel Paper Submission A problem that deserves attention is the interdisciplinary nature of connectionism, which plays a non-negligible role in the difficulties of the reviewing process. Connectionists come from diverse research areas, ranging e.g. from electrical engineering to psychology, from neurobiology to mathematics and physics, and so on. Papers therefore often result from a cross-fertilization of several research branches, which makes them difficult to read for reviewers who do not possess such wide knowledge. Published broad-area papers are as interesting to Readers as specialized papers, but they may create severe difficulties in the review phase. In my opinion, part of the delay in the review process arises when an Editor (or action or associate Editor) faces the problem of assigning a paper a proper set of reviewers: it is not infrequent that, after a long time, people simply return the papers unreviewed, reporting that they are unable to make any useful comments or that some sections, e.g. the theoretical ones, appear inaccessible.
This means that the reviewers are not late with respect to the review deadline; rather, they provide a null report. This creates trouble for the Editors, who usually take one of two possible decisions: 1) Simply reject the paper, suggesting the Author submit it to another, more suitable journal, or 2) Try to assign a new set of reviewers in the hope of having better luck, in fact re-starting the whole process. In this situation, by taking a negative decision the Editor implicitly assumes the Author is responsible for the bad outcome -- and this might not be so wrong, actually -- while the second choice burdens the Editor's office or the Editor him/herself and leaves the Author with the feeling that an embarrassingly long, unjustified review time is being taken for his/her paper, because he/she is unaware of the difficulties the hidden people are encountering. As someone else has already suggested, a possible solution to this problem is a semi-blind review process, where any Author can suggest a list of suitable potential reviewers for the submitted paper; the Author knows they are potentially able to read and comment on the paper, and the longer the list an Editor can count on, the less the Author knows about whom the paper will actually be sent to for review. To be realistic, I think that if we want our papers to be read by people who actually know the topic, the "conflict of interests" is intrinsic... but this is natural in our scientific life. About the reviewers, I don't see drawbacks in asking PhD students or post-docs to perform reviews, provided that this is intended in the right way: this could ultimately be a good exercise for them -- striving to comment on an academically valuable paper, or to detect and comment on the weaknesses of a scientific proposal -- and a good source of high-level notes and observations for Authors.
A pool of PhD students or post-docs (such as room-mates) with some research experience can sometimes exceed the knowledge spread and diversity of a single person. My last note concerns a ground-level proposal on which I ask the opinion of colleagues: some conferences and journals have started managing submissions and reviews by email or, even better, by dedicated web pages; I can report the great examples of the IEEE Trans. on Antennas and Propagation, the IEEE Trans. on Circuits and Systems - Part II, and Neural Processing Letters, just to cite three; they allow papers and reviews to be submitted electronically, without the need for printing, snail-mail, or faxing, with a non-negligible gain of time (and postage savings, of course...). I would suggest journals definitely move to electronic paper submission and review. All the best, Simon. =================================================== Dr Simone Fiori (EE, PhD)- Assistant Professor Dept. of Industrial Engineering (IED) University of Perugia (UNIPG) Via Pentima bassa, 21 - 05100 TERNI (Italy) eMail: sfr@unipg.it - Fax: +39 0744 492925 Web: http://www.unipg.it/~sfr/ =================================================== From: Bernadette Garner Message-Id: <200112030359.fB33x8w19102@nexus.csse.monash.edu.au> Subject: Re: Parallel Paper Submission To: Nando de Freitas Date: Mon, 3 Dec 2001 14:59:08 +1100 (EST) Cc: connectionists@cs.cmu.edu, nando@ubc.cs.ca In-Reply-To: from "Nando de Freitas" at Dec 01, 2001 10:28:41 AM X-Mailer: ELM [version 2.5 PL2] MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit > 4) I love journals like "Journal of the Royal Statistical Society - B" > because many of the papers include reviews at the end. It turns out > that some of the reviews are very critical and really good. I often > find myself reading the reviews before reading the paper!
Of course, > since the reviews get published and CITED, people make an effort to be > constructive, soundly critical and not make fools of themselves. This > is a great model - slow but good. I think this is a good idea. It will cut down the number of terrible reviews (where the reviewer didn't have a clue). But I am wondering if it could prevent people from actually reading papers if they read the reviews first. I know people who won't see movies if the reviews are bad, and that may not be fair because occasionally editors/reviewers have their own agendas. Bernadette Garner Date: Mon, 3 Dec 2001 12:37:33 +0000 (GMT) From: Bob Damper To: rinkus cc: connectionists@cs.cmu.edu Subject: RE: Parallel Paper Submission: Separate Refereeing and Editorial processes In-Reply-To: <004001c17761$c0941db0$17bcfea9@DBJH8M01> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII .. but it's not uncommon for journals to send submissions straight out to graduate students, short-circuiting the advisor/supervisor. Sometimes, graduate students will ask the advice of their supervisor about how to approach the review, but not always. The reason students get asked to do such an important task when they ``lack the knowledge and wisdom to provide a fair review of novel ideas'', to use Rod's words, is that journals are generally struggling to get enough reviewers. Editors and editorial assistants don't always know who is who in the field, especially if the journal has a wide remit. If a grad student has recently published something relevant and it comes to the attention of an editor seeking a reviewer, then they become fair game. This shortage of good qualified referees is going to continue as long as there is no tangible reward (other than a warm altruistic feeling) for the onerous task of reviewing. So, as many others have pointed out, parallel submissions will exacerbate this situation rather than improve it. Not a good idea! Bob.
On Tue, 27 Nov 2001, rinkus wrote: > > > If people are genuinely interested in improving the scientific review > process you might want to consider making it unacceptable for the > graduate students of reviewers to do the actual reviewing. Graduate > students are just that...students...and lack the knowledge and wisdom to > provide a fair review of novel ideas. > > In many instances a particular student may have particular knowledge and > insight relevant to a particular submission but the proper model here is > for the advertised reviewer (i.e., whose name appears on the editorial > board of the publication) to consult with the student about the > submission (and this should probably be in an indirect fashion so as to > protect the author's identity and ideas) and then write the review from > scratch himself. The scientific review process is undoubtedly worse off > to the extent this kind of accountability is not ensured. We end up > seeing far too much rehashing of old ideas and not enough new ideas. > > Rod Rinkus > > > > Date: Mon, 3 Dec 2001 18:15:43 -0800 (PST) Message-Id: <200112040215.SAA09737@stockholm> From: Anand Venkataraman To: connectionists@cs.cmu.edu In-reply-to: <200112030359.fB33x8w19102@nexus.csse.monash.edu.au> (message from Bernadette Garner on Mon, 3 Dec 2001 14:59:08 +1100 (EST)) Subject: Re: Parallel Paper Submission >> 4) I love journals like "Journal of the Royal Statistical Society - B" >> because many of the papers include reviews at the end. It turns out >> that some of the reviews are very critical and really good. I often >> find myself reading the reviews before reading the paper! Of course, >> since the reviews get published and CITED, people make an effort to be >> constructive, soundly critical and not make fools of themselves. This >> is a great model - slow but good. > > I think this is a good idea. It will cut down the number of terrible I too think this is a fantastic idea. 
It simultaneously solves two problems -- that of reviewer "remuneration" and that of "malicious/bad reviews". The problem, however, is that the reviewer's anonymity is lost upon publication. But why would a reviewer want to remain anonymous unless he/she had submitted a malicious review? In my own case at least, I have wished on one occasion that one of the four reviews a paper of mine received had been published with the reviewer's name on it. I have also wished on more than one occasion that the author of a paper I had reviewed knew my identity when reading my review. The only other issue I see here is that "reviews written to be published" and those written to "improve the paper" tend to be quite different in character. But I guess this is a simple matter to address. The reviewer can simply be asked to take another look at the final submission, with instructions not to suggest more changes but rather to submit the final review for publication. Isn't this how the JRSS handles it? Message-Id: <200112032239.OAA04183@hpp-ss10-4.Stanford.EDU> Subject: Re: Parallel Paper Submission: Separate Refereeing and Editorial To: Bob Damper Date: Mon, 3 Dec 2001 14:39:49 -0800 (PST) Cc: rinkus , connectionists@cs.cmu.edu In-Reply-To: from "Bob Damper" at Dec 03, 2001 12:37:33 PM From: Karl Pfleger Reply-To: Karl Pfleger X-Mailer: ELM [version 2.5 PL2] MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit From a free-market economics point of view, I think this discussion has finally come around to the crux of the biggest problem with the present reviewing system: not enough personal incentive for reviewers to do a good job (or to do it quickly, for that matter). While the discussion started out being mostly about speed to publication, the issue of review quality has persistently resurfaced.
And while many people have proposed some sort of free-market solution to letting papers compete, what I think would be more helpful would be turning some economic and motivational scrutiny on the reviews themselves. Reviewing is hard, and it should be rewarded, and the reward should ideally be somewhat proportional to quality. Right now the reward is mostly altruism and personal pride in doing good work. There is a little bit of reputation involved in that the editors and sometimes the other reviewers see the results and can attach a name to them, but this is a weak reward signal because of how narrow the audience is. The economic currency of academia is reputation. (There was a short column or article about this somewhere, maybe Science, but I don't remember.) The major motivation for writing good papers in the first place is the effect they have on your reputation. (These papers are part of your "professional voice", as Phil Agre's Networking on the Network document puts it.) This in turn affects funding, job hunting, tenure decisions, etc., so there is plenty of motivation to do it well. It would be nice to create a stronger incentive (reward signal) for review quality. This is not as absurd as it seems, being only a slight jump away from similar standard practices. Part of a review is quality assessment, but tied in with that is advice on how to improve the work. Advice in some other contexts is amply rewarded in reputational currency. Advisors are partly judged by the accomplishments of the students they have advised. People who give advice on how to improve a paper are often mentioned in an acknowledgements section. Often the job they do is very similar to that of a reviewer; it just isn't coordinated by an editor. Sometimes such people become co-authors, and then they get the full benefit of reputational reward for their efforts. Even anonymous reviewers are thanked in acknowledgements sections, though their reputations are not aided by this.
Sometimes the line between the contributions of a reviewer and an author is somewhat blurry. Many people probably know of examples where a particularly helpful anonymous reviewer contributed more to a paper than someone who was, due to some obligation, listed as a coauthor. But many reviews are quite unhelpful or are way off on the quality assessment. Quality would improve more consistently if the reviewer got some academic reputational currency out of doing good reviews (and corresponding potential to look foolish for being very wrong). How best to change the structure of the reviewing system to accomplish this is an open question. Someone mentioned a journal where reviews are published with the articles. This has some benefits, but also some problems. Reviews for articles that are completely rejected are not published. We don't want people to agree to review only articles they think will get published. Also, while publishing reviews gives a little incentive not to screw up, to fully motivate quality such things would have to become regularly scrutinized in tenure and job decisions as an integral part of the overall publication record. But the field would have to be careful to separate the quality of the review from the quality and fame of the reviewed material itself, again so as not to encourage jockeying to review only the papers that look to be the most influential. Clearly I don't have all the answers, but I advocate looking at the problem in terms of economic incentives, in the same way that economists look at other incentive systems, such as incentive stock options for corporate employees, which serve a useful purpose but have well-understood drawbacks from an incentive perspective. Note that review quality is a somewhat separate issue from the also-important filtering and attention-selection issue, such as the software that Geoff Hinton requested.
Even a perfect personalized selection mechanism would not completely replace the benefits of a reviewing system. For example, reviews still help authors to improve their work, and thereby the entire field. And realistically no such perfect selection mechanism will ever exist, so selection will always be greatly aided by quality improvement and filtering at the source side. Thus we should be interested in structural mechanisms to improve the quality of reviews (as well as in useful selection mechanisms to tell us what to read). -Karl ------------------------------------------------------------------------------- Karl Pfleger kpfleger@cs.stanford.edu www-cs-students.stanford.edu/~kpfleger/ ------------------------------------------------------------------------------- > From: Bob Damper > > This shortage of good qualified referees is going to continue all the > time there is no tangible reward (other than a warm altruistic feeling) > for the onerous task of reviewing. So, as many others have pointed > out, parallel submissions will exacerbate this situation rather than > improve it. Not a good idea! > > Bob. > > On Tue, 27 Nov 2001, rinkus wrote: > > > > In many instances a particular student may have particular knowledge and > > insight relevant to a particular submission but the proper model here is > > for the advertised reviewer (i.e., whose name appears on the editorial > > board of the publication) to consult with the student about the > > submission (and this should probably be in an indirect fashion so as to > > protect the author's identity and ideas) and then write the review from > > scratch himself. The scientific review process is undoubtedly worse off > > to the extent this kind of accountability is not ensured. We end up > > seeing far too much rehashing of old ideas and not enough new ideas. 
> > > > Rod Rinkus > > Message-ID: <3C0B779A.42D571@matavnet.hu> Date: Mon, 03 Dec 2001 14:01:15 +0100 From: "LORINCZ, Andras" X-Mailer: Mozilla 4.75 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: connectionists@cs.cmu.edu Subject: "parallel submission" -- software References: Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Information-distributing software with ACCESS CONTROL is available. If you wish to solve the original problem of Gabriele Dorothea Scheler and Johann Martin Philipp Schumann, you need to decide ONLY about access control on the connectionists mailing list. The connectionists mailing list already serves as an advertisement place for technical reports and papers anyway. It is then a good idea to start parallel submission at this single point. There is not much controversy in this statement. Here is an initial suggestion, which may need to be polished/ironed out/confronted. The author uploads his/her paper to connectionists. Notification goes to everybody who has a subscription. Uploading and notification are unmoderated. (One can set a filter on his/her email not to accept mails from connectionists with subject 'new paper'.) The paper is cached at connectionists and becomes available for downloading. Anybody can write a review of the paper. Reviews are automatically linked to the paper. Reviews are anonymous -- the reviewer has an ssh-like (encrypted) channel to connectionists -- and each reviewer has a public code. The list and the "top acknowledged reviewers" together can reveal the names of the "top acknowledged reviewers". If the author takes the reviewer's opinion into account, then he/she can write a revised version of the paper. When uploading this revised version, he/she is supposed to acknowledge the reviewer's public code. This is clearly in the author's interest -- provided that he/she would like to promote the reviewer.
In turn, works which need improvement and are improved with the reviewer's help will serve as the basis of selection. If a reviewer is acknowledged, then this reviewer receives a credit (impact) point. There is a ranking of reviewers according to their impact factors, and a list of the top $n$ most acknowledged reviewers. The names of these $n$ reviewers can be disclosed to the public; it is the reviewer's decision whether he/she joins this top list. These acknowledged reviewers decide (vote) whether a paper becomes 'accepted' or not. A paper can be accepted without acknowledgments, for example, if it is perfect as submitted. Acceptance means qualification. Acceptance may also mean opening a forum for discussion about the paper -- that is, open reviewing written by named people (similar to the discussions at BBS). Open reviewing happens through connectionists -- this will be handled by another notification list. The top $N>n$ acknowledged reviewers have the right to write open reviews, and their names are provided. In turn, the $N$ acknowledged reviewers may be known to the public, and the $n$ top acknowledged reviewers may vote. Any journal can accept the paper. If the editorial board of a journal accepts the paper, then it is up to the author whether he/she would like to give the copyright to that journal -- he/she might be waiting for a better journal, or, alternatively, he/she might have submitted the paper to a journal at the very beginning and given the copyright to that journal to start with. If copyright is given to a journal, it should be noted on connectionists. It is the journals' problem how to deal with this challenge. The recent shift of the editorial board of MLJ to JMLR gives a feeling for the possible outcome. Regards, Andras Lorincz http://people.inf.elte.hu/lorincz P.S. Anyone could build this software. There are freeware solutions, such as 'mailman'. We have also built one with intelligent search options.
It has been thoroughly tested with Windows Explorer, but does not support Netscape. Any organization might decide to write/set up/buy similar software. This seems a most probable step in the near future. In this case we shall experience a selective process similar to the evolution of electronic markets: lots of attempts will start and only a few will survive. So, get started! P.P.S. I have put a paper onto the web. It is closely related to this topic. It will appear in the Special Issue of IJFCS (International Journal of Foundations of Computer Science) on Mining the Web. Title: "New Generation of the World Wide Web: Anticipating the birth of the 'hostess' race" http://people.inf.elte.hu/lorincz/ParallelSubmission/Lorincz_et_al_Intelligent_Crawler_revised.zip The paper is in a WinZipped postscript file. P.P.P.S. I like the idea of parallel submission. I have the feeling that some reviewers are negligent, may be lacking time, may be students (lacking knowledge) of authorities in the field, and may be biased agaynszt non-nateave-Inglish-spieking autorz. :-) Date: Tue, 4 Dec 2001 12:26:28 +0000 (GMT) From: Mike Titterington Reply-To: Mike Titterington Subject: Re: Parallel Paper Submission To: connectionists@cs.cmu.edu MIME-Version: 1.0 Content-Type: TEXT/plain; charset=us-ascii Content-MD5: 7PJJGOv1qBO8RngcaOVdbg== X-Mailer: dtmail 1.3.0 CDE Version 1.3 SunOS 5.7 sun4m sparc Message-Id: ------------- Begin Forwarded Message ------------- >> 4) I love journals like "Journal of the Royal Statistical Society - B" >> because many of the papers include reviews at the end. It turns out >> that some of the reviews are very critical and really good. I often >> find myself reading the reviews before reading the paper! Of course, >> since the reviews get published and CITED, people make an effort to be >> constructive, soundly critical and not make fools of themselves. This >> is a great model - slow but good. > > I think this is a good idea.
It will cut down the number of terrible > The only other issue I see here is that "reviews written to be published" > and those written to "improve the paper" tend to be quite different in > character. But I guess this is a simple matter to address. The reviewer > can simply be requested to relook at the final submission with instructions > not to suggest more changes, but rather to submit the final review for > publication. Isn't this how the JRSS handles it? ------------- End Forwarded Message ------------- I think that it is worth clarifying the JRSS practice. It is not really true that the journal publishes referees' reviews of papers. What happens is that the RSS holds about 10 meetings per year at which certain papers are 'read' and discussed. Versions of the verbal discussions and any written contributions sent in soon after the meeting are lightly edited and are printed, followed by a rejoinder from the authors of the paper. It is very likely that some of the discussants are people who acted as referees, and possibly some of the points made in the referee reports are reiterated in the discussion, but not necessarily. Anyone at all is at liberty to submit a discussion contribution, whether or not they have reviewed the paper. These discussion papers are carefully selected with a view to their being likely to stimulate a lively discussion, as well as being 'scientifically important'. I'd agree with Nando that the discussion can be at least as interesting and stimulating as the paper itself! Maybe I can add one or two points, from the point of view of an editor. 1. I can't imagine coping with parallel submissions. Handling x incoming submissions per year is bad enough. The thought of 4x or even 2x is frightening. I have to side with Grace's original reaction to the proposal! 2. It is hard to envisage any easy alternative to the present system. 
It is important to have a strong, conscientious and knowledgeable group of associate editors, whose joint expertise covers the journal's range and who can express a cogent opinion on any paper they are sent; this means they either can act, in effect, as the sole referee (a practice that helps to speed things up) or can adjudicate reliably if multiple referees provide conflicting reports. 3. The issue of rewarding referees is difficult, although I believe some journals offer free issues as 'payment'. I think there's more to it than pure altruism. If one wants one's own work to be efficiently and promptly reviewed, then it seems fair to repay this by contributing some time to refereeing other people's work. The journal I'm involved with does print an annual list of referees, as an acknowledgement, and this sort of practice does provide some small public recognition. Mike Titterington. =================================================================== D.M. Titterington, Department of Statistics, University of Glasgow, Glasgow G12 8QQ, Scotland, UK. mike@stats.gla.ac.uk Tel (44)-141-330-5022 Fax (44)-141-330-4814 http://www.stats.gla.ac.uk/~mike Mime-Version: 1.0 X-Sender: jbower@131.215.25.158 Message-Id: Date: Tue, 4 Dec 2001 09:39:00 -0800 To: connectionists@cs.cmu.edu From: "James M. Bower" Subject: improving the review process Content-Type: text/plain; charset="us-ascii" ; format="flowed" I am currently writing a book on the state of modern biological research, comparing that state to the development of physics in the 16th and 17th centuries. In the book I am using examples from paper and grant reviews we have received to support the proposition that biology is essentially a folkloric, pre-paradigmatic science that needs to develop a solid, quantitative foundation to move forward as a real science. For that reason, I have spent quite a bit of time recently looking through old reviews of our papers.
The remarkable thing about those reviews is that there is typically very little criticism of the methods or results sections; instead the focus is almost always on the discussion. My favorite quote from one of our reviews (and in fact, the source for the title of the forthcoming book) is "I have no more concerns about the methods or results, but I am deeply concerned about what would happen if a graduate student read the discussion". Accordingly, I think that the quality and usefulness of the review process would be greatly improved if the discussion section were excluded, and not even sent to reviewers. In my view, the discussion section should give an author free rein to consider the implications of their work in their own words, unfettered by what is all too often a kind of thought censorship or, in effect, a demand for patronage. Professional expertise is necessary to assure that a paper has no methodological flaws, and that the results are not overstated or overdrawn. But the discussion is the reward that an author should get for having pulled off the former two. How much more interesting and revealing would the scientific literature be if authors felt free to express their real opinions and, heaven forbid, even speculate once in a while? I should mention one other theme in the book that is relevant to much of this discussion. "Modern" scientific journal publishing was actually invented in the 17th century as a means of providing general communication among a new age of physicists. (It is also believed that Newton was interested in controlling who said what.) The important point for this discussion is that a 10-page paper is sufficient space to describe a new approach to understanding planetary motion, but it is not, in my opinion, even close to sufficient to present a theory appropriate for understanding biology.
Just as the Transactions of the Royal Society promoted the development of a common quantitative base for physics, a new form of publication is now necessary to establish such a base for biology and other complex systems. On that - stay tuned.... Jim Bower -- *************************************** James M. Bower Ph.D. Research Imaging Center University of Texas Health Science Center - San Antonio and Cajal Neuroscience Research Center University of Texas - San Antonio (626) 791-9615 (626) 791-9797 FAX (626) 484-3918 (cell worldwide) Temporary address for correspondence: 110 Taos Rd. Altadena, CA. 91001 WWW addresses for: laboratory (temp) http://www.bbb.caltech.edu/bowerlab GENESIS (temp) http://www.bbb.caltech.edu/GENESIS From: Rudolf Jaksa MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <15374.11341.734421.130097@lada.aid.kyushu-id.ac.jp> Date: Wed, 5 Dec 2001 15:16:45 +0100 To: connectionists@cs.cmu.edu Subject: publishing model X-Mailer: VM 6.96 under Emacs 21.1.1 Sender: Rudolf Jaksa I'm thinking about how to apply the free software development (publishing) model to scientific publishing...

Present scientific publishing:

1. An article (for instance paper.pdf) is sent to a journal or conference for review (it is in camera-ready form).
2. If the author gets some feedback from reviewers, she may improve the article.
3. The article is printed on paper (and presented in a talk).
4. Other people may read this article and use ideas from it in their future work.

The free software model applied to scientific publishing:

1. All the data useful for further work on the problem are published in a single "package". Instead of only a camera-ready paper, the source code for algorithms, pictures, sample data, etc. are also published. This "package" is displayed somewhere on the internet.
2. Availability of this work is announced on established mailing lists or internet forums.
3. Other people may download this "article" or read it directly on the internet.
4.
They may also download it, incorporate parts of it in their own future work, or publish an improved version of the article. They may send their comments to the author, and she can incorporate them into the next version of the article ("package").

A good thing about this model is that it has already been proven to work, though it may not work for "scientific articles". But in my opinion, working on a paper and working on program code are very similar. And many people seem to think that free software itself was inspired by scientific publishing...

What I also like about this model (as opposed to the current scientific publishing model):

* Less money is wasted in the publishing process, which means that more people are "allowed" to read an article. And more people are allowed to publish too.
* Continuation of work is better, as there may be several versions of a single "article". This is in contrast to several articles spread across different journals and proceedings.
* The exchange of ideas can be much faster. The author-reviewer-readers-author loop can be reduced to a few days if the work is actually a "hot topic".
* More data and more types of data can be published; publishing is not restricted to 10 pages of text.

Actually, I know about one book published this way and a few program packages with papers included, but I think this free software publishing approach may be more useful for the scientific community. I can even imagine this as the primary publishing method in science... R.Jaksa
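[Editorial addendum] The "package" model described in the message above is easy to prototype. The following is a minimal, purely illustrative sketch -- all class, field, and file names here are hypothetical assumptions, not an existing system -- of an article package that carries paper, code, and data together, accumulates versions, and records which reviewers' comments were incorporated:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Review:
    reviewer: str   # reviewer's public code/pseudonym
    comments: str

@dataclass
class PackageVersion:
    number: int
    files: List[str]                                  # paper, source code, sample data, ...
    acknowledged: List[str] = field(default_factory=list)  # reviewers whose comments were used

@dataclass
class ArticlePackage:
    title: str
    author: str
    versions: List[PackageVersion] = field(default_factory=list)

    def publish(self, files: List[str]) -> PackageVersion:
        """Upload a new version of the whole package."""
        v = PackageVersion(number=len(self.versions) + 1, files=list(files))
        self.versions.append(v)
        return v

    def revise(self, files: List[str], reviews: List[Review]) -> PackageVersion:
        """Publish a revised version, acknowledging the incorporated reviews."""
        v = self.publish(files)
        v.acknowledged = [r.reviewer for r in reviews]
        return v

# A package evolves in place instead of being scattered across journals.
pkg = ArticlePackage("An Example Article", "author@example.org")
pkg.publish(["paper.pdf", "algorithm.py", "sample-data.csv"])
pkg.revise(["paper.pdf", "algorithm.py"],
           [Review("reviewer-17", "clarify section 3")])
print(pkg.versions[-1].number)        # 2
print(pkg.versions[-1].acknowledged)  # ['reviewer-17']
```

The growing version list is the point of the model: the author-reviewer-reader loop becomes a sequence of package revisions rather than separate publications spread across venues.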