From The Atlantic
Here’s what’s next.
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they’ve contributed to a replication crisis, or, put another way, to a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.
Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.
What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”
Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)
The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And as in most papers, the findings were still hard to absorb, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.
Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself.
Bret Victor
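The procedure a reader would otherwise have to “play computer” for is, at bottom, a short rewiring loop. As a rough illustration, here is my own simplified Python sketch of the Watts-Strogatz construction (not Victor’s code, and not the paper’s):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Build a ring lattice of n nodes, each linked to its k nearest
    neighbors (k even), then rewire each edge with probability p."""
    rng = random.Random(seed)
    lattice = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for (u, v) in lattice:
        if rng.random() < p:
            # choose a fresh endpoint, avoiding self-loops and repeat edges
            candidates = [w for w in range(n)
                          if w != u and (u, w) not in edges and (w, u) not in edges]
            v = rng.choice(candidates)
        edges.add((u, v))
    return edges
```

At p = 0 the result is a regular ring; at p = 1 it is essentially random; in between lie the “small worlds” with short paths and high clustering. An interactive version of exactly this loop is what Victor’s redesign lets the reader drive by hand.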
Strogatz admired Victor’s design. He later told me that it was a shame that in mathematics it’s been a tradition for hundreds of years to make papers as formal and austere as possible, often suppressing the very visual aids that mathematicians use to make their discoveries.
Strogatz studies nonlinear dynamics and chaos, systems that get into sync or self-organize: fireflies flashing, metronomes ticking, heart cells firing electrical impulses. The key is that these systems go through cycles, which Strogatz visualizes as dots running around circles: When a dot comes back to the place where it started—that’s a firefly flashing or a heart cell firing. “For about 25 years now I’ve been making little computer animations of dots running around circles, with colors indicating their frequency,” he said. “The red are the slow guys, the purple are the fast guys … I have these colored dots swirling around on my computer. I do this all day long,” he said. “I can see patterns much more readily in colored dots running, moving on the screen than I can in looking at 500 simultaneous time series. I don’t see stuff very well like that. Because it’s not what it really looks like … What I’m studying is something dynamic. So the representation should be dynamic.”
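Strogatz’s swirling dots are phase oscillators running around a circle. A toy version of that kind of coupled-oscillator model can be sketched in a few lines; the Kuramoto-style sine coupling below is my illustration of the genre, not his code:

```python
import math, random

def kuramoto_step(phases, freqs, coupling, dt):
    """One Euler step of dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    out = []
    for theta, omega in zip(phases, freqs):
        pull = coupling / n * sum(math.sin(t - theta) for t in phases)
        out.append((theta + (omega + pull) * dt) % (2 * math.pi))
    return out

def order_parameter(phases):
    """r in [0, 1]; r = 1 when every dot sits at the same point on the circle."""
    n = len(phases)
    return math.hypot(sum(math.cos(t) for t in phases) / n,
                      sum(math.sin(t) for t in phases) / n)
```

With identical natural frequencies and strong coupling, the order parameter climbs toward 1 as the dots bunch up on the circle, which is the syncing-up that the animations make visible at a glance.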
Software is a dynamic medium; paper isn’t. When you think in those terms it does seem strange that research like Strogatz’s, the study of dynamical systems, is so often being shared on paper, without the benefit of his little swirling dots—because it’s the swirling dots that helped him to see what he saw, and that might help the reader see it too.
This is, of course, the whole problem of scientific communication in a nutshell: Scientific results today are as often as not found with the help of computers. That’s because the ideas are complex, dynamic, hard to grab ahold of in your mind’s eye. And yet by far the most popular tool we have for communicating these results is the PDF—literally a simulation of a piece of paper. Maybe we can do better.
Stephen Wolfram published his first scientific paper when he was 15. He had published 10 when he finished his undergraduate career, and by the time he was 20, in 1980, he’d finished his Ph.D. in particle physics from the California Institute of Technology. His secret weapon was his embrace of the computer at a time when most serious scientists thought computational work was beneath them. “By that point, I think I was the world’s largest user of computer algebra,” he said in a talk. “It was so neat, because I could just compute all this stuff so easily. I used to have fun putting incredibly ornate formulas in my physics papers.”
As his research grew more ambitious, he found himself pushing existing software to its limit. He’d have to use half a dozen programming tools in the course of a single project. “A lot of my time was spent gluing all this stuff together,” he said. “What I decided was that I should try to build a single system that would just do all the stuff I wanted to do—and that I could expect to keep growing forever.” Instead of continuing as an academic, Wolfram decided to start a company, Wolfram Research, to build the perfect computing environment for scientists. A headline in the April 18, 1988, edition of Forbes pronounced: “Physics Whiz Goes Into Biz.”
At the heart of Mathematica, as the company’s flagship product became known, is a “notebook” where you type commands on one line and see the results on the next. Type “1/6 + 2/5” and it’ll give you “17/30.” Ask it to factor a polynomial and it will comply. Mathematica can do calculus, number theory, geometry, algebra. But it also has functions that can calculate how chemicals will react, or filter genomic data. It has in its knowledge base nearly every painting in Rembrandt’s oeuvre and can give you a scatterplot of his color palette over time. It has a model of orbital mechanics built in and can tell you how far an F/A-18 Hornet will glide if its engines cut out at 32,000 feet. A Mathematica notebook is less a record of the user’s calculations than a transcript of their conversation with a polymathic oracle. Wolfram calls carefully authored Mathematica notebooks “computational essays.”
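The first exchange above is easy to reproduce outside Mathematica; Python’s standard fractions module, for instance, does the same exact rational arithmetic (this is my illustration, not a Wolfram Language feature):

```python
from fractions import Fraction

# A notebook-style exchange: ask for 1/6 + 2/5, get an exact rational back.
result = Fraction(1, 6) + Fraction(2, 5)
print(result)  # 17/30
```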
The notebook interface was the brainchild of Theodore Gray, who was inspired while working with an old Apple code editor. Where most programming environments either had you run code one line at a time, or all at once as a big blob, the Apple editor let you highlight any part of your code and run just that part. Gray brought the same basic concept to Mathematica, with help refining the design from none other than Steve Jobs. The notebook is designed to turn scientific programming into an interactive exercise, where individual commands were tweaked and rerun, perhaps dozens or hundreds of times, as the author learned from the results of their little computational experiments, and came to a more intimate understanding of their data.
“It’s incalculable, literally … how much is lost, and how much time is wasted.”
What made Mathematica’s notebook especially suited to the task was its ability to generate plots, pictures, and beautiful mathematical formulas, and to have this output respond dynamically to changes in the code. In Mathematica you can input a voice recording, run complex mathematical filters over the audio, and visualize the resulting sound wave; just by mousing through and adjusting parameters, you can warp the wave, discovering which filters work best by playing around. Mathematica’s ability to fluidly handle so many different kinds of computation in a single, simple interface is the result, Gray says, of “literally man-centuries of work.”
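The audio workflow reduces to “run a filter over samples, compare before and after.” A stdlib-only Python sketch of that idea, with a synthetic signal standing in for the voice recording and a crude moving average standing in for Mathematica’s filters (both substitutions are my assumptions):

```python
import math

def moving_average(samples, width):
    """Crude low-pass filter: each sample becomes the mean of its window."""
    half = width // 2
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

# A synthetic "recording": a 220 Hz tone buried under 3000 Hz hiss.
rate = 8000
wave = [math.sin(2 * math.pi * 220 * i / rate)
        + 0.5 * math.sin(2 * math.pi * 3000 * i / rate)
        for i in range(rate)]
filtered = moving_average(wave, 9)  # the hiss mostly averages away
```

In a notebook, the filter width would be a slider, and the waveform would redraw as you drag it; that tight loop of tweak-and-look is the playing-around the paragraph describes.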
The vision driving that work, reiterated like gospel by Wolfram in his many lectures, blog posts, screencasts, and press releases, is not merely to make a good piece of software, but to create an inflection point in the enterprise of science itself. In the late 1600s, Gottfried Leibniz devised a notation for integrals and derivatives (the familiar ∫ and dx/dt) that made difficult ideas in calculus almost mechanical. Leibniz developed the sense that a similar notation applied more broadly could create an “algebra of thought.” Since then, logicians and linguists have lusted after a universal language that would eliminate ambiguity and turn complex problem-solving of all kinds into a kind of calculus.
Wolfram’s career has been an ongoing effort to vacuum up the world’s knowledge into Mathematica, and later, to make it accessible via Wolfram Alpha, the company’s “computational knowledge engine” that powers many of Siri and Alexa’s question-answering abilities. It is Wolfram’s own attempt to create an Interlingua, a programming language equally understandable by humans and machines, an algebra of everything.
It is a characteristically grandiose ambition. In the 1990s, Wolfram would occasionally tease in public comments that at the same time he was building his company, he was quietly working on a revolutionary science project, years in the making. Anticipation built. And then, finally, the thing itself arrived: a gargantuan book, about as wide as a cinder block and nearly as heavy, with a title for the ages—A New Kind of Science.
It turned out to be a detailed study, carried out in Mathematica notebooks, of the surprisingly complex patterns generated by simple computational processes—called cellular automata—both for their own sake and as a way of understanding how simple rules can give rise to complex phenomena in nature, like a tornado or the pattern on a mollusk shell. These explorations, which Wolfram published without peer review, came bundled with reminders, every few pages, about how important they were.
The more of Wolfram you encounter, the more this seems to be his nature. The 1988 Forbes profile about him tried to get to the root of it: “In the words of Harry Woolf, the former director of the prestigious Institute for Advanced Study in [Princeton, New Jersey]—where Wolfram, at 23, was one of the youngest senior researchers ever—he has ‘a cultivated difficulty of character added to an intrinsic sense of loneliness, isolation, and uniqueness.’”
When one of Wolfram’s research assistants announced at a conference a significant mathematical discovery that was a core part of A New Kind of Science, Wolfram threatened to sue the hosts if they published it. “You won’t find any serious research group that would let a junior researcher tell what the senior researcher is doing,” he said at the time. Wolfram’s massive book was panned by academics for being derivative of other work and yet stingy with attribution. “He insinuates that he is largely responsible for basic ideas that have been central dogma in complex systems theory for 20 years,” a fellow researcher told the Times Higher Education in 2002.
Wolfram’s self-aggrandizement is especially vexing because it seems unnecessary. His achievements speak for themselves—if only he’d let them. Mathematica was a success almost as soon as it launched. Users were hungry for it; at universities, the program soon became as ubiquitous as Microsoft Word. Wolfram, in turn, used the steady revenue to hire more engineers and subject-matter experts, feeding more and more information to his insatiable program. Today Mathematica knows about the anatomy of a foot and the laws of physics; it knows about music, the taxonomy of coniferous trees, and the major battles of World War I. Wolfram himself helped teach the program an archaic Greek notation for numbers.
All of this knowledge is “computable”: If you wanted, you could set “x” to be the location of the Battle of the Somme and “y” the daily precipitation, in 1916, within a 30-mile radius of that point, and use Mathematica to see whether World War I fighting was more or less deadly in the rain.
“I’ve noticed an interesting trend,” Wolfram wrote in a blog post. “Pick any field X, from archeology to zoology. There either is now a ‘computational X’ or there soon will be. And it’s widely viewed as the future of the field.” As practitioners in those fields become more literate with computation, Wolfram argues, they’ll vastly expand the range of what’s discoverable. The Mathematica notebook could be an accelerant for science because it could spawn a new kind of thinking. “The place where it really gets exciting,” he says, “is where you have the same transition that happened in the 1600s when people started to be able to read math notation. It becomes a form of communication which has the incredibly important extra piece that you can actually run it, too.”
The idea is that a “paper” of this sort would have all the dynamism Strogatz and Victor wanted—interactive diagrams interleaved within the text—with the added benefit that all the code generating those diagrams, and the data behind them, would be right there for the reader to see and play with. “Frankly, when you do something that is a nice clean Wolfram-language thing in a notebook, there’s no bullshit there. It is what it is, it does what it does. You don’t get to fudge your data,” Wolfram says.
To write a paper in a Mathematica notebook is to reveal your results and methods at the same time; the published paper and the work that begot it. Which shouldn’t just make it easier for readers to understand what you did—it should make it easier for them to replicate it (or not). With millions of scientists worldwide producing incremental contributions, the only way to have those contributions add up to something significant is if others can reliably build on them. “That’s what having science presented as computational essays can achieve,” Wolfram said.
HT/PeterL

The R language has the same capabilities in R Markdown and R notebooks.
It’s actually very cool. Whatever you write becomes an executable document.
So you take your actual code, code that downloads the data, code that analyzes the data, code that creates the charts, and you embed it in your paper. That paper then consists of two kinds of text:
normal text where you explain your results, and other text, the code, that can be re-executed as the user wishes.
The science paper actually DOES the work in front of your eyes.
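As a minimal sketch of what such a document looks like (a hypothetical R Markdown file; the title, chunk name, and prose are my inventions, while `mtcars`, `lm`, `plot`, and `abline` are standard R):

````markdown
---
title: "A hypothetical executable paper"
output: html_document
---

Heavier cars get worse mileage; the chunk below re-runs the
analysis and redraws the figure every time the paper is rendered.

```{r mpg-vs-weight}
fit <- lm(mpg ~ wt, data = mtcars)  # mtcars ships with R
plot(mpg ~ wt, data = mtcars)
abline(fit)
```
````

Rendering the file (e.g., with knitr) executes the chunk and embeds the fresh figure in the output, so the narrative and the computation can never drift apart.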
The proper forum for a paper is online, with comments allowed. Although many times I do not understand an article here, I can get some understanding and see who is having the better argument from the points made in the comment thread.
A large problem I see with communicating results via electronic media versus written papers is longevity. Today’s computer software/graphics will likely be obsolete in 20 or so years and the results using them may not even be readable by whatever new software comes along. Written results are forever. I can go to a library and read the old papers from hundreds of years ago. Much of the programming and data I did back in the “old” days was on floppy disks or tapes, which I can no longer access. Heck, I can’t even access some of the data and graphing I did 10 years ago. I have submitted papers to online journals. Who knows if those journals will be around in 10 years.
This is truly a point worth remembering.
Now you’ve said it, I agree.
And wish I had realised it myself.
Which cookbook has the tastiest meals? I seriously doubt a computer program is going to be correct.
“Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. ”
Boy, if computers are generating data, what are all the field scientists doing? The author does not seem to distinguish hard-earned data from model output.
In any case, this piece is just a PR piece to make Wolfram an idol. However, he is, by his own admission, a puffed-up pilferer, a plagiarizer, and a bully. I simply refer to the poor academic who tried to have a word of his own at a conference. We don’t hear of him now; he was probably sacked and is on lithium pills. Wolfram’s product clearly appropriates from everywhere and owns all the tears, sweat, and effort of all those workers. Through inclusion rather than hard work, Mr. Wolfram folds into his shitty SW the works of thousands of dedicated people who created art, science, and culture. And without paying a dime to them or their descendants.
History of science and even art are full of these bullies, who gets all the credit for others work. Just to name a few, Einstein (who remembers Mdm Maric, his wife), Brecht (who remembers his harem, Mdm Berlau for instance), and say Oppenheimer (who remembers General Groves).
Scientific publications, regardless of what this piece says, by definition are and should be publications. Not PowerPoint slides. Words have meaning, not fancy bullet points and Instagram pictures. Let us see if they can write a 20-page manuscript to describe a finding that consumed at least 3 years of their life. No, they simply pillage it from its rightful owners.
Whoever wrote this is dumber than sh!t.
OMG… what a waste of electrons this article is …
Very interesting
“They depend on chains of computer programs that generate data,” 🙂
Sorry, but that explains everything.
The Scientific Paper is Obsolete
Agree. The progressive-leftists have abused the once-decent process so much & for so long, they have pretty much destroyed it. The “scientific paper” can no longer be trusted.
The person who does the peer review should always be named. If the paper is hogwash, he or she should be discredited as well as the journal that publishes the review. This would bring a massive improvement and promotion of real science.
I agree. And similarly, if I was president, I would make every reporter who asks me a question to begin his question with “My name is so-and-so and I work for such-and-such media organization”.
I would agree. I would also suggest looking at who is funding the research. This often gives a good indication of potential bias in the results. Sometimes the conclusions are written by those supporting the “research,” which may or may not be supported by the data. I would never impugn the integrity of a researcher in general, but eating regularly and living indoors is nice.
I think the scientific paper in computer interactive format is a great idea, if the computer language is transparent. For years, I used MathCad, which started as just a scientific word processor. But with the incorporation of the Maple engine, it also became a powerful computer algebra tool. The one thing it wasn’t, for a very long time, was something that would print all of the pretty equations worth a darn, and that was the original point. Over the years, MathSoft improved it to the point where it fulfilled all of my expectations – and then they sold it to PTC, who promptly destroyed it completely. They have slowly built back its capability in every area except graphics (which suck, big time), and as a bonus have begun to charge exorbitant annual license fees.
I checked out Mathematica, and while I love the power, it’s more than I need for engineering. Plus, it’s very difficult to master. I finally settled on Maple 2018, which has extraordinary power and is relatively easy to master. Plus, it is reasonably priced for a piece of software of its capability. It has the capability to produce interactive scientific papers (and that’s one of its selling points), with just the same appeal as Mathematica’s. I’m not trying to hawk one over the other, but will say that there are many options for this approach to publishing, and I consider it a good thing.
Oh boy. Another example of “I like how this looks so it must be better”…
Scientific papers have only one purpose. To convey information accurately so that others can read, analyze, understand, and respond. Just because some form is trendy, does not make it better.
Simple is Best.
Apply Occam’s Razor.
1 Mass, charge, magnetic moment, spin, energy, isospin and angular momentum are due to one: spin.
2 There are equal numbers of particles and antiparticles, particles being the spinors that make up what we now call particles.
3 There are antiprotons in nuclei.
4 Alpha is 3 protons and 1 antiproton. (2 neutrons are p/p-.)
5 Basic potential between spinors is hbar*delta_v/r.
6 Gravity: ‘scattering ‘ of potential (or mass rays) by nuclei.
7 Nuclear geometry: everything fits in 0.853fm*A^(1/3) space.
8 Nuclear potential : p/p- attraction, p/p or p-/p- repulsion, gravity and rotational energy.
9 Charge: delta_v = c/137 (see 5)
10 Relativity: sum of spinors between two protons moving PLUS sum over sphere around test proton, using 5.
11 Relative mass: due to incomplete sampling of field by spinor on sphere due to acceleration.
12 There’s an extra spinor with v ~c/6 in the fringe of a lone antiproton for odd-A nuclei.
I am a millennial college student and I disagree with this. The real advantage of the scientific paper is that it puts a lot of information in one read-through. When writing a research paper for class, it is easier to differentiate bull from legitimate studies by reading a paper than by clicking through multiple PowerPoints. I know that scientists hide behind their jargon, but oftentimes their jargon gives them away.
I spent a year learning Mathematica, only to forget it by the time I actually needed it.
I’ve done a lot of C and awk, have done 3D in POVRay and currently openSCAD. Trying to learn Blender.
gnuplot for plots. No good solution for spinors moving on a sphere. (Python, but I don’t know it.)
WordPress and html5 for website. I’m not very good at it.
So I stumble along.
The peer-reviewed, printed, and bound journal idea was a cost-management system, admittedly at the expense of a free flow of ideas. Its persistence across changes in information-sharing technology likely reveals the size, influence, and power of the journal industry. The review process itself has also been compromised as a cost-saving measure. Here is an example of what the idea of peer review once was.
The peer review of Callendar 1938, the mother of all climate change papers.
https://tambonthongchai.com/2018/06/29/peer-review-comments-on-callendar-1938/
“it was a shame that in mathematics it’s been a tradition for hundreds of years to make papers as formal and austere as possible, often suppressing the very visual aids that mathematicians use to make their discoveries.”
It started with Descartes with his analytic geometry. He transformed geometrical diagrams into algebraic equations:
Circle
x^2 + y^2 = a^2
Ellipse
x^2 / a^2 + y^2 / b^2 = 1
Parabola
y^2 = 4 a x
Hyperbola
x^2 / a^2 - y^2 / b^2 = 1
Newton used geometrical diagrams in his Principia Mathematica. But Lagrange used equations in his Analytical Mechanics (a.k.a. Lagrangian mechanics). He boasted that he gave lectures in mechanics without drawing a single diagram.
Euclid drew geometrical diagrams. Hilbert replaced the diagrams with pure axioms and mathematical logic. He boasted you could replace the lines and points in geometry with tables and chairs and his axioms would still be true.
Call me old-fashioned but I like drawing diagrams. As an engineering undergrad, I remember my professor gave a purely analytical solution to a problem in mechanics. He used kinematic equations and matrices. His solution occupied the whole white board.
I used a graphical solution to vectors and good old trigonometry. We got the same answers but my solution is easier and shorter. I wrote it all on 1/3 of the white board.
And because of Descartes’ algebraic approach, he totally missed what Leibniz discovered: vis viva, colloquially called kinetic energy. Just look at Descartes’ hilarious 7 rules for billiard-ball collisions! Talk about fake physics! Leibniz had great fun with this. Descartes’ “I think therefore I am” becomes “I think of dinner, therefore I am” … Awesome!
It was Kepler before him who pointed out this mistake: something outside geometry, namely a physical principle, universal gravitation, or force, is knowable yet not found in the arithmetic of Ptolemy, Copernicus, or Brahe.
It is simply stunning that Russell would try that again later with Hilbert, until Gödel had to demolish the rubbish.
Looks like the lead here is again a call to Descartes resurrection, or Russell rehabilitation.
The Copenhagen “interpretation” with statistics is simply the Russell program (with lipstick).
How to deal with the glaring paradox at the core of physics – non-locality? Einstein with EPR put this question on the table. Bohm made it even more explicit after a chat with Einstein – both of whom were not satisfied with the exposition (see Hiley). To unravel this J.S. Bell of CERN wrote various papers with the “famous” inequality. It is something that needs urgent attention. Einstein wrote that our concept of causality is currently very limited and pointed to music – the causality of a Bach Fugue, for example, hints at these problems.
So much for (Principia) Mathematica.
Non-locality has become a dogma. Anyone who questions it gets ad hominem from dogmatists. I formulated a theorem to show the flaw in Bell’s theorem. I was pleased to learn that a physicist from Oxford had done the same before me. I corresponded with Dr. Christian since his paper has some similarity with my theorem although I’ve done it independently.
Here’s his paper published in the journal of the Royal Society
https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.180526
haha, is Joy Christian still a doctor? I thought they were going to revoke his degree.
They even set up a crackpot Randi challenge for him, which Sascha Vongehr created:
https://www.science20.com/alpha_meme/official_quantum_randi_challenge-80168
Still waiting to see him do the simple challenge 🙂
BTW, if you want to know what is wrong with the paper, it’s trivial: the pairs are anti-correlated, they don’t even depend on the local settings a and b, and the correlation answer is -2. It’s defined before you start.
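The arithmetic behind this objection can be checked in a few lines. Below is a toy local model of my own construction (not code from Christian’s paper): the outcomes are A = s and B = -s for a shared sign s, ignoring the detector settings entirely, so every correlator E(a, b) is -1 and the CHSH combination comes out exactly -2.

```python
import random

def chsh_for_anticorrelated_pairs(trials=10000, seed=0):
    """Toy local model: outcomes A = s, B = -s for a shared sign s,
    with the settings a, b ignored.  Every correlator E(a, b) is then
    exactly -1, so S = E00 + E01 + E10 - E11 = -2."""
    rng = random.Random(seed)
    sums = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for _ in range(trials):
        s = rng.choice((-1, 1))                    # shared hidden variable
        a, b = rng.randrange(2), rng.randrange(2)  # settings (unused below)
        A, B = s, -s
        sums[a, b] += A * B
        counts[a, b] += 1
    E = {k: sums[k] / counts[k] for k in sums}
    return E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]
```

Since every single product A*B is -1 regardless of the settings, the value -2 is fixed before the “experiment” even runs, which is the commenter’s point.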
More pseudo junk from Joy, who has been told the problem over and over again but seems too thick to work out the problem.
I have not read the comments, so perhaps someone has already said this, but anyway: I don’t think there is anything wrong with the scientific paper. I think mostly what is obsolete is what people are calling research. If something needs huge amounts of statistical analysis, it probably means whatever the result is isn’t much use anyway. Pity science is government funded. I think a lot of govt-funded science is just an excuse for not getting a proper job and for getting the mortgage paid. If research is about global warming, it’s about BS. Much of pharmaceutical company research is also BS to comply with government requirements and cover up much of the truth. Also, it appears that the more difficult the language and the more obscure the terminology, the better academics like it. God forbid they write a paper that clearly explains what they have done rather than obscuring it with gobbledygook. And yes, I have a science degree, I’ve worked in government scientific research, and yes, I have a very modest publication record.