6 Comments

I had lunch with Benoit Mandelbrot a few years before his death. It was quite fun. I had an observation about fractals applied to the way mRNA is translated, and to gene replication's contribution to evolution, as in the famous example of feathers. That was great. But Mandelbrot was still bitter that his most seminal papers were rejected by the top journals and had to be published in minor ones.

I agree with you about the "gray journals". I had an unfortunate experience with that. I submitted an invited paper to a new open-access journal being started by a professor in Montana, Nicholas Burgis. The review process was a good experience. https://www.omicsonline.org/security-in-a-goldfish-bowl-the-nsabbs-exacerbation-of-the-bioterrorism-threat-2157-2526.S3-013.php?aid=11953

This paper could never have been published in a "proper journal" because every academic associated with it is offended at what I have to say about their singular contribution to biosecurity, which is to shout in the ears of our enemies precisely what we do not want them to know. (Their only saving grace is that they are usually quite wrong about what the most dangerous information is.) Sadly, those who want to ignore what I have said can now grandly dismiss it as an article in a "predatory," pay-to-publish journal.

I'll also offer a criticism, and observe that this article, like much I read on the problem, makes an implicit presumption that is, in my experience, false. That assumption is that reviewers are, as a general rule, competent and sensible. In my experience this is sometimes true, and in my career it was more so when submitting from a university. But I could show you reviews that are beyond appalling. I have one fairly recent one, from a subfield of biology, whose author is outraged to have fundamental assumptions questioned. That review rejected the very concept of using mathematics in biology, citing Mario Livio's book "The Golden Ratio" as proof and calling it "numerology" that nobody could take seriously. The same review mocked a data source because it was bilingual in Chinese and English, ignoring that each referenced paper had been located, read, and its figure(s) verified or not.

The reality is that many people with academic positions are, in the words of a director I discussed some issues with, "just stupid." The smart ones spend little time on reviews. And of course, the problem of assigning reviews to grad students is well known, though such reviews are unlikely to be abusive. I even have one I am quite certain was produced by first-year undergrads asked to "review" the material. This problem of over-promoted academics is a conundrum, but I think it is getting worse, and with ChatGPT the problem may become overwhelming. In corruption, the bad drives out the good, and the corrupt are bound to help each other far more than the honest do. I saw this operate up close in two labs when I was in grad school.


I like this article very much, having become horrified at what junk gets published even in top tier journals.

Unfortunately this junk often makes it to the coalface of actual psychological practice, polluting the therapeutic environment and making a mockery of the claim of “evidence basis”.


I wonder how these results (the ones reported in PNAS) stack up against our bio-related journal findings on which rigor criteria were used by what percentage of authors in a journal (Menke et al. 2020, 2022)? This is not a "you missed a citation" post, but genuine curiosity about this approach versus ours for measuring reproducibility.

If the PNAS paper diverges widely from the journal scores, I would question the validity of any of these measures.

Mar 11, 2023·edited Mar 11, 2023

This article seems like it was written by a sore loser who cannot publish anything in any decent journal ... so he/she/it wrote a half-assed, pointless article to trash-talk all the publishers.

TL;DR: POINTLESS article with a sore loser as an author.


Kind of a pointless article.
