From The Atlantic
Here’s what’s next.
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they’ve contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.
Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.
What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”
Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)
The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And as in most papers, the findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it; that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.
Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself.
(Image: Bret Victor)
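For readers who want to run the idea rather than play computer, here is a minimal Python sketch of the construction the Watts-Strogatz paper describes: start from a ring lattice and rewire each edge with probability p. This is a stand-in, not Victor’s interactive code; the function name and parameters are illustrative.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Small-world graph: a ring of n nodes, each linked to its k nearest
    neighbors, with every edge rewired to a random node with probability p."""
    rng = random.Random(seed)
    # Ring lattice: node i connects to its k/2 clockwise neighbors.
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)}
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Rewiring step: with probability p, replace edge (u, v) with (u, w).
    for u, v in sorted(edges):
        if rng.random() < p:
            w = rng.randrange(n)
            while w == u or w in adj[u]:
                w = rng.randrange(n)
            adj[u].remove(v)
            adj[v].remove(u)
            adj[u].add(w)
            adj[w].add(u)
    return adj

graph = watts_strogatz(n=20, k=4, p=0.1)
print(sorted(graph[0]))  # mostly local neighbors, with the odd long-range shortcut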
Strogatz admired Victor’s design. He later told me that it was a shame that in mathematics it’s been a tradition for hundreds of years to make papers as formal and austere as possible, often suppressing the very visual aids that mathematicians use to make their discoveries.
Strogatz studies nonlinear dynamics and chaos, systems that get into sync or self-organize: fireflies flashing, metronomes ticking, heart cells firing electrical impulses. The key is that these systems go through cycles, which Strogatz visualizes as dots running around circles: When a dot comes back to the place where it started—that’s a firefly flashing or a heart cell firing. “For about 25 years now I’ve been making little computer animations of dots running around circles, with colors indicating their frequency,” he said. “The red are the slow guys, the purple are the fast guys … I have these colored dots swirling around on my computer. I do this all day long,” he said. “I can see patterns much more readily in colored dots running, moving on the screen than I can in looking at 500 simultaneous time series. I don’t see stuff very well like that. Because it’s not what it really looks like … What I’m studying is something dynamic. So the representation should be dynamic.”
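The swirling-dots picture maps onto the standard Kuramoto model of coupled phase oscillators. Strogatz’s own animations are surely richer, so treat this Python sketch as an assumption-laden stand-in: each dot is a phase on a circle with its own natural frequency, pulled toward the crowd’s average.

```python
import cmath, math, random

def kuramoto_step(phases, freqs, coupling, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    new = []
    for theta, omega in zip(phases, freqs):
        pull = sum(math.sin(pj - theta) for pj in phases) / n
        new.append((theta + (omega + coupling * pull) * dt) % (2 * math.pi))
    return new

random.seed(1)
n = 10
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
freqs = [random.gauss(1.0, 0.1) for _ in range(n)]  # color by frequency: slow vs. fast dots

for _ in range(2000):
    phases = kuramoto_step(phases, freqs, coupling=1.5, dt=0.01)

# Order parameter r: 0 = scattered dots, 1 = all dots swirling together.
r = abs(sum(cmath.exp(1j * theta) for theta in phases)) / n
print(f"order parameter r = {r:.2f}")
```

Plot the phases as colored dots on a circle at each step and you have his animation; an order parameter near 1 is the synchronized state.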
Software is a dynamic medium; paper isn’t. When you think in those terms it does seem strange that research like Strogatz’s, the study of dynamical systems, is so often being shared on paper, without the benefit of his little swirling dots—because it’s the swirling dots that helped him to see what he saw, and that might help the reader see it too.
This is, of course, the whole problem of scientific communication in a nutshell: Scientific results today are as often as not found with the help of computers. That’s because the ideas are complex, dynamic, hard to grab ahold of in your mind’s eye. And yet by far the most popular tool we have for communicating these results is the PDF—literally a simulation of a piece of paper. Maybe we can do better.
Stephen Wolfram published his first scientific paper when he was 15. He had published 10 when he finished his undergraduate career, and by the time he was 20, in 1980, he’d finished his Ph.D. in particle physics from the California Institute of Technology. His secret weapon was his embrace of the computer at a time when most serious scientists thought computational work was beneath them. “By that point, I think I was the world’s largest user of computer algebra,” he said in a talk. “It was so neat, because I could just compute all this stuff so easily. I used to have fun putting incredibly ornate formulas in my physics papers.”
As his research grew more ambitious, he found himself pushing existing software to its limit. He’d have to use half a dozen programming tools in the course of a single project. “A lot of my time was spent gluing all this stuff together,” he said. “What I decided was that I should try to build a single system that would just do all the stuff I wanted to do—and that I could expect to keep growing forever.” Instead of continuing as an academic, Wolfram decided to start a company, Wolfram Research, to build the perfect computing environment for scientists. A headline in the April 18, 1988, edition of Forbes pronounced: “Physics Whiz Goes Into Biz.”
At the heart of Mathematica, as the company’s flagship product became known, is a “notebook” where you type commands on one line and see the results on the next. Type “1/6 + 2/5” and it’ll give you “17/30.” Ask it to factor a polynomial and it will comply. Mathematica can do calculus, number theory, geometry, algebra. But it also has functions that can calculate how chemicals will react, or filter genomic data. It has in its knowledge base nearly every painting in Rembrandt’s oeuvre and can give you a scatterplot of his color palette over time. It has a model of orbital mechanics built in and can tell you how far an F/A-18 Hornet will glide if its engines cut out at 32,000 feet. A Mathematica notebook is less a record of the user’s calculations than a transcript of their conversation with a polymathic oracle. Wolfram calls carefully authored Mathematica notebooks “computational essays.”
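Readers without a Mathematica license can get a feel for this conversational loop from open-source analogues. A rough sketch using Python’s sympy library, which is a substitute of my choosing, not what Wolfram’s notebooks run:

```python
from sympy import Rational, symbols, factor, integrate, sin

print(Rational(1, 6) + Rational(2, 5))   # 17/30, exact rational arithmetic
x = symbols('x')
print(factor(x**3 - x))                  # x*(x - 1)*(x + 1)
print(integrate(x * sin(x), x))          # -x*cos(x) + sin(x)
```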
The notebook interface was the brainchild of Theodore Gray, who was inspired while working with an old Apple code editor. Where most programming environments either had you run code one line at a time, or all at once as a big blob, the Apple editor let you highlight any part of your code and run just that part. Gray brought the same basic concept to Mathematica, with help refining the design from none other than Steve Jobs. The notebook is designed to turn scientific programming into an interactive exercise, where individual commands are tweaked and rerun, perhaps dozens or hundreds of times, as the author learns from the results of these little computational experiments and comes to a more intimate understanding of their data.
“It’s incalculable, literally … how much is lost, and how much time is wasted.”
What made Mathematica’s notebook especially suited to the task was its ability to generate plots, pictures, and beautiful mathematical formulas, and to have this output respond dynamically to changes in the code. In Mathematica you can input a voice recording, run complex mathematical filters over the audio, and visualize the resulting sound wave; just by mousing through and adjusting parameters, you can warp the wave, discovering which filters work best by playing around. Mathematica’s ability to fluidly handle so many different kinds of computation in a single, simple interface is the result, Gray says, of “literally man-centuries of work.”
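Mathematica’s signal-processing built-ins go far beyond this, but the tweak-and-rerun loop itself is easy to sketch. Here is an assumed stand-in in Python with numpy, smoothing a noisy synthetic wave at three different filter widths:

```python
import numpy as np

def smooth(signal, window):
    """Moving-average filter; `window` is the knob you tweak and rerun."""
    return np.convolve(signal, np.ones(window) / window, mode='same')

t = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(0)
wave = np.sin(2 * np.pi * 5 * t) + 0.4 * rng.standard_normal(t.size)

for window in (5, 25, 125):  # the notebook loop: adjust, rerun, compare
    residual = float(np.std(wave - smooth(wave, window)))
    print(window, round(residual, 3))
```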
The vision driving that work, reiterated like gospel by Wolfram in his many lectures, blog posts, screencasts, and press releases, is not merely to make a good piece of software, but to create an inflection point in the enterprise of science itself. In the late 1600s, Gottfried Leibniz devised a notation for integrals and derivatives (the familiar ∫ and dx/dt) that made difficult ideas in calculus almost mechanical. Leibniz developed the sense that a similar notation, applied more broadly, could create an “algebra of thought.” Since then, logicians and linguists have lusted after a universal language that would eliminate ambiguity and turn complex problem-solving of all kinds into a kind of calculus.
Wolfram’s career has been an ongoing effort to vacuum up the world’s knowledge into Mathematica, and later, to make it accessible via Wolfram Alpha, the company’s “computational knowledge engine” that powers many of Siri and Alexa’s question-answering abilities. It is Wolfram’s own attempt to create an Interlingua, a programming language equally understandable by humans and machines, an algebra of everything.
It is a characteristically grandiose ambition. In the 1990s, Wolfram would occasionally tease in public comments that at the same time he was building his company, he was quietly working on a revolutionary science project, years in the making. Anticipation built. And then, finally, the thing itself arrived: a gargantuan book, about as wide as a cinder block and nearly as heavy, with a title for the ages—A New Kind of Science.
It turned out to be a detailed study, carried out in Mathematica notebooks, of the surprisingly complex patterns generated by simple computational processes—called cellular automata—both for their own sake and as a way of understanding how simple rules can give rise to complex phenomena in nature, like a tornado or the pattern on a mollusk shell. These explorations, which Wolfram published without peer review, came bundled with reminders, every few pages, about how important they were.
The more of Wolfram you encounter, the more this seems to be his nature. The 1988 Forbes profile about him tried to get to the root of it: “In the words of Harry Woolf, the former director of the prestigious Institute for Advanced Study in [Princeton, New Jersey]—where Wolfram, at 23, was one of the youngest senior researchers ever—he has ‘a cultivated difficulty of character added to an intrinsic sense of loneliness, isolation, and uniqueness.’”
When one of Wolfram’s research assistants announced at a conference a significant mathematical discovery that was a core part of A New Kind of Science, Wolfram threatened to sue the hosts if they published it. “You won’t find any serious research group that would let a junior researcher tell what the senior researcher is doing,” he said at the time. Wolfram’s massive book was panned by academics for being derivative of other work and yet stingy with attribution. “He insinuates that he is largely responsible for basic ideas that have been central dogma in complex systems theory for 20 years,” a fellow researcher told the Times Higher Education in 2002.
Wolfram’s self-aggrandizement is especially vexing because it seems unnecessary. His achievements speak for themselves—if only he’d let them. Mathematica was a success almost as soon as it launched. Users were hungry for it; at universities, the program soon became as ubiquitous as Microsoft Word. Wolfram, in turn, used the steady revenue to hire more engineers and subject-matter experts, feeding more and more information to his insatiable program. Today Mathematica knows about the anatomy of a foot and the laws of physics; it knows about music, the taxonomy of coniferous trees, and the major battles of World War I. Wolfram himself helped teach the program an archaic Greek notation for numbers.
All of this knowledge is “computable”: If you wanted, you could set “x” to be the location of the Battle of the Somme and “y” the daily precipitation, in 1916, within a 30-mile radius of that point, and use Mathematica to see whether World War I fighting was more or less deadly in the rain.
“I’ve noticed an interesting trend,” Wolfram wrote in a blog post. “Pick any field X, from archeology to zoology. There either is now a ‘computational X’ or there soon will be. And it’s widely viewed as the future of the field.” As practitioners in those fields become more literate with computation, Wolfram argues, they’ll vastly expand the range of what’s discoverable. The Mathematica notebook could be an accelerant for science because it could spawn a new kind of thinking. “The place where it really gets exciting,” he says, “is where you have the same transition that happened in the 1600s when people started to be able to read math notation. It becomes a form of communication which has the incredibly important extra piece that you can actually run it, too.”
The idea is that a “paper” of this sort would have all the dynamism Strogatz and Victor wanted—interactive diagrams interleaved within the text—with the added benefit that all the code generating those diagrams, and the data behind them, would be right there for the reader to see and play with. “Frankly, when you do something that is a nice clean Wolfram-language thing in a notebook, there’s no bullshit there. It is what it is, it does what it does. You don’t get to fudge your data,” Wolfram says.
To write a paper in a Mathematica notebook is to reveal your results and methods at the same time; the published paper and the work that begot it. Which shouldn’t just make it easier for readers to understand what you did—it should make it easier for them to replicate it (or not). With millions of scientists worldwide producing incremental contributions, the only way to have those contributions add up to something significant is if others can reliably build on them. “That’s what having science presented as computational essays can achieve,” Wolfram said.
HT/PeterL
Perception has changed more than anything
Peer review does not mean it’s right… but by the time someone takes an interest, tries to reproduce it, and publishes something counter, the damn paper has been cited 10,000 times. It’s lazy science.
Good point. I don’t see much new here… we’ve used Mathematica for many years; it has useful computation tools. But I don’t see how one can conclude “the scientific paper is obsolete.” Just plotting data in a different way to improve understanding is nothing new… it’s what publishing scientists should try to do anyway.
It’s evolutionary rather than revolutionary.
I agree. My primary gripe with scientific papers that depend upon computers pertains to the difficulties with replicating, or worse, vetting the proffered results. Transparency demands that the source code, data sets, build scripts, and driving scripts be available for examination and reuse, as well as a full disclosure of the configuration of the underlying software (e.g. Mathematica versions/libraries, C/C++ compilers/libraries).
When I went through the East Anglia CRU data dump of 2009, the simulation software released was in lamentable condition from a software engineering perspective – an absence of version control, coherent structure, documentation, and revision history/rationale. However, the embedded comments were revealing: they admitted that the papers already published at the time contained results that could not be replicated by the simulation software, and that some poor devil (a postdoc?) had been hired to tweak the code to get it to reproduce the already-published results, with a rough-and-ready overview of that person’s various attempts to ‘fix’ the software. This is probably the most egregious example of incompetence and malpractice one could cite, but I am confident that someone will eventually try something more audacious.
I’ve looked into climate computer models. They are garbage, from the computer science perspective. From a physicist’s perspective, they are anti-physical.
Even the simple ones have bugs. It took me five minutes to find this bug: https://github.com/ddbkoll/PyRADS/issues/2
Imagine what I could do with those models if I were paid to find bugs, instead of doing it for free just because I was annoyed by some Facebook climastrological propaganda 🙂
Bingo!
I published a paper on Joe d’Aleo’s website in January 2008, here:
CARBON DIOXIDE IS NOT THE PRIMARY CAUSE OF GLOBAL WARMING: THE FUTURE CAN NOT CAUSE THE PAST
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
I included my text, my data sources and an Excel spreadsheet that contained all my data and calculations. I think that should be the standard for all scientific papers – otherwise, how can one ever properly critique?
The paper was critiqued on Steve McIntyre’s website, but that critique failed – it falsely alleged that the following close relationship was “spurious correlation” – that allegation was obvious nonsense.
http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14
My conclusions were confirmed and extended (maybe a bit too far?) by Humlum et al. five years later, here:
THE PHASE RELATION BETWEEN ATMOSPHERIC CARBON DIOXIDE AND GLOBAL TEMPERATURE
Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
Global and Planetary Change, Volume 100, January 2013, Pages 51-69
https://www.sciencedirect.com/science/article/pii/S0921818112001658
I generally agree with Humlum’s first three conclusions, as follows (a sketch of how such lags are estimated appears after the list):
1– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
2– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
3– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.
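Whatever one makes of the conclusions, the underlying method is simple to state. Below is a minimal Python sketch, on synthetic data (not Humlum’s), of how such a lead-lag is typically estimated: correlate one series against the other at a range of offsets and take the lag where the correlation peaks.

```python
import numpy as np

rng = np.random.default_rng(0)
temp = np.cumsum(rng.standard_normal(480))   # synthetic monthly temperature series
lag_true = 10                                # build in a known 10-month lag
co2_rate = np.roll(temp, lag_true) + 0.3 * rng.standard_normal(480)

def best_lag(x, y, max_lag=24):
    """Lag k (in months) at which corr(x[t], y[t-k]) is largest."""
    trim = slice(max_lag, -max_lag)          # drop edges affected by shifting
    return max(range(-max_lag, max_lag + 1),
               key=lambda k: np.corrcoef(x[trim], np.roll(y, k)[trim])[0, 1])

print(best_lag(co2_rate, temp))  # ≈ 10: the CO2 series trails temperature
```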
Critiques of Humlum et al. were similarly nonsensical, and failed to refute their three core conclusions. See his key plot here:
https://www.facebook.com/photo.php?fbid=1551019291642294&set=pob.100002027142240&type=3&theater
Eleven years after my paper was published, scientists are still arguing the magnitude of climate sensitivity to increasing atmospheric CO2, but they are really arguing about by how much the future is causing the past. 🙂
The powerful tools that allow simplification of analysis and jazzing-up of data presentation change narratives and invite manipulation of thought. Shortcuts past thorough and detailed analysis are very dangerous. They allow the presentation to become the message, rather than the actual data and supportable analysis. Look at any global warming paper with multicolor mapping of modeled data: are you studying the modeling, or are the presenters counting on the multicolor pictured result, rather than their analysis, to get your attention and support for their work? A lot of four-color mapping is guilty of fake science. The messaging is more important than the capital-S Science.
It used to be that peer review did attempt to challenge the author’s statements. Today it is more pal review, and papers are rubber-stamped. Any attempt to challenge these papers after publication is usually discouraged by the journal in which they appear.
Alas, today the magic words “published in a PEER-REVIEWED journal” carry the golden seal of truth, accuracy, and good science in the collective mind of the public. This is used to bludgeon anyone having valuable input on climate change, for instance: “I refuse to pay you any attention because I can’t find any of your papers published in a PEER-REVIEWED journal.” This is nearly always uttered by some slacker who would not recognize a scientific journal if struck on the noggin with it.
We live in strange times…
In fields that are working correctly, pretty much 99% of published papers have holes punched in them and turn out to be of junk value. If you aren’t getting those sorts of numbers, then the field isn’t working correctly, because you really want to test the limits of theory.
As an example, running around testing that gravity works exactly the same at every point on Earth gives you no more confidence that the theory is right, and funnily enough it wasn’t. You want to try to find places where the theory might break and test those.
Thank you Pamela – I agree with your above statement, especially as it relates to climate science papers. Pal review of utter falsehoods/frauds such as the Mann hockey stick papers (MBH98 and successors) has turned this field into a cesspool.
My decision to read a climate paper is first governed by the authors: If the authors have a track record of integrity and discovery, then I read them – I cannot be bothered wasting time reading anything from those who have a history of fraud.
This is a big time-saver, but on the off-chance that the hockey team or minions come up with something truly worthwhile, I might miss it. NOT! Just joking! The probability of this happening is so infinitesimally small that it is well-worth the risk. 🙂
I would prefer a simple bullet-point format that is restricted to background information, methods, data results, and conclusions. No “opinion” would be allowed in any of the bullets. For example, if you publish a paper that shows decreasing sea ice, the conclusion bullet point would state: our results support the hypothesis that sea ice decreased within the context of the methods. There would be no mention of “due to man-made global warming” unless your empirical data was designed to address that question. Those statements would be restricted to the paragraphs of a new section called editorial, but it would be made clear that a researcher’s opinions about the data are separate from the data themselves and not valid to be used as evidence for future hypotheses.
A Scientific Paper is:
a) An abstract
b) A conclusion
c) Some raw data
d) Some blurb about how the raw data led to the conclusion (usually in computer code)
e) Further blurb about the uncertainties involved (usually statistically based and sometimes practically too)
f) References
If you really want to make a paper dynamic you need to do the following:
• In c put a search engine on the raw data.
• In d make the code fully accessible with lots of notes.
• In e standardise the uncertainty calculations with lots of notes
Sections a, b and f are all fine as PDFs.
None of which makes the paper any better, any more usable or any more likely to be right 🙂
It would make it more usable.
If it’s wrong it doesn’t make it any more usable in any way, unless you define it as making it easier to regurgitate an error 🙂
All papers are wrong eventually. If it’s easier to use it’s easier to work out why.
I don’t think it makes it any easier to work out why at all. Again go back to Newton and reading and understanding all the papers under the sun that quote Philosophiæ Naturalis Principia Mathematica won’t help you understand why the theory breaks.
I suggest we agree to disagree.
I think clearly written papers that are easy to follow are a good means of communicating ideas. Whether those ideas are right or wrong.
I would also add a link to a translation of a science paper that changed the world… it is 3 pages.
https://www.fourmilab.ch/etexts/einstein/E_mc2/e_mc2.pdf
No abstract, no raw data, some mathematics, and no references – and it rocked the world 🙂
Aah, Albert, you made a small error in the last equation on page 2.
So it’s really E=mc^2.718281828459
Never mind – just truncate it to 2 – it sounds a lot better! Kinda catchy!
🙂
Don’t worry about it Albert, nobody will notice, not even in a hundred years!
You know how it is, everybody pretends to understand you, but nobody really does – they are all too embarrassed to admit it!
Relax! Let me buy you a schnapps!
Beautifully written, and the translation looks good.
Talk about pulling the carpet from under the referenced consensus on what was “self evident” – mass, inertia, radiation, energy, space, time! Simultaneity was already gone; gravitation was next. Around that time he also showed that supposedly continuous propagation comes in quanta.
Reasoned, knowable principles are indeed immensely powerful, even outside maths, as Gödel showed when he destroyed, in much the same way, Russell’s Principia Mathematica. There seems to be much nostalgia for that discredited work.
it’s all about hyperlinks, popups, search
paper can’t do what html does.
and it takes a 14 year old to manage html
grownups invaded the net with no skills. who would pay to have their text hosted on ancient BBS format?
blogs mark the decline of the internet, tbh.
it was all downhill when the invaders came to turn it into TV.
“it’s all about hyperlinks, popups, search
paper can’t do what html does.”
err yes it can. Take a look at R markdown or R notebooks.
Think of it as a markup language just like html, except certain parts of the document are executable.
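A minimal sketch of what such a document looks like in source form (illustrative only, using knitr's Python engine for the executable chunk; R chunks work the same way):

````markdown
## Rewiring and clustering

Ordinary prose, headings, and links work as in HTML. The chunk below
is executed when the document is rendered, and its output is inlined.

```{python}
p = 0.1  # rewiring probability; re-render the document to re-run this
print(f"fraction of edges rewired: {p}")
```
````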
I was referring to the time when you had to wait weeks for halftones and proofs and you mailed out reprints in response to written or typed requests that came in stamped envelopes – that was paper.
To err, I guess, but non carpe tedium, please. 🙂
Please have a free copy of this, as it is somewhat pertinent.
https://principlesofscience.files.wordpress.com/2018/04/principles-of-science-and-ethical-guidelines-for-scientific-conduct-v9-0.pdf
Philosophiæ Naturalis Principia Mathematica by Sir Isaac Newton is one of the most important papers/works in the history of science, and would easily be the most built-upon, if not the most directly quoted, science paper of all time.
Just a shame it was a complete dead end and really badly wrong, and no amount of analysis of the paper itself would have revealed it.
Sorry, the basic question in science comes back to: does the theory explain all observed data? Ideally what is supposed to happen is that you get peer review from your harshest critic. If you want to see how it is supposed to work, take a look at the Bohr–Einstein debates and the two of them going at it 🙂
It seems a bit extreme to classify classical mechanics and Newtonian celestial mechanics as “badly wrong”; it is actually such an extremely good approximation that it took 200+ years until it was materially improved upon by relativistic physics and quantum mechanics.
You are confusing useful with right/wrong. We have lots of rules of thumb in our lives that are useful because they work in a small range but are ultimately actually wrong.
Let’s list just two of the big problems with classical physics. It’s badly wrong because it leads you to a dead end and to really horrible understandings.
1.) Energy is only something used for accounting; it isn’t intrinsic. You have to graft on all stored energy, like nuclear and magnetic, and you can’t even have an empty patch of space that has energy.
Google the answer to how a permanent magnet holds a suspended load from a roof beam indefinitely. Under classical physics you have to say there is no work being done, so there is no energy involved, and that will be the standard answer. There is a massive amount of energy involved in doing the trick; it just lies totally outside classical physics (the same happens with atomic physics), and even most laymen sense that there has to be energy involved, they just can’t understand from where.
2.) Any action at a distance drives it into complete meltdown, because you need an extra dimension and have the problem of the interaction speed and how these things at a distance sense each other. So basically anything with a field, or stored energy in a field, requires rather imaginative and creative answers.
LdB
This guy, Einstein, gave us something, in its most simple form expressed in his equation:
E = mc², like “mc squared”.
Considering c², “c squared”, being a constant, what is the difference between “E” and “m”?
According to that equation, one happens to just be a very highly dense storage in consideration of the other.
Really confusing! The difference between E and m… energy and mass… maybe simply density of the medium, in structure!
The amount of energy in relation to mass does not depend on the nature or property of the matter, but actually on the value of the mass, of whatever matter, regardless of whether the matter happens to be gold, silver, iron or wool; one may consider the amount of energy to be the same per any kg of mass, regardless of the matter or substance of the mass.
Maybe I do not understand much of Einstein’s or Newtonian physics, but am just trying to.
Hope I am not making a mess here.
cheers
The problem here is that to do this I need to undo everything you have been taught… so if you want to try the ten-second version, here goes.
The universe is not made up of just energy and matter; it has things called fields. Particles and matter are manifestations of localizations within the fields (like cyclones, storm fronts etc. are in weather), and the fields extend everywhere within the universe. In fact you can easily define what makes our universe by defining the universe as any patch of space that has the fields, and outside the universe as anything that does not. What is important about the fields is that they define energy in our universe (try doing that in classical physics).
The way to visualize a point in space under that theory is as a spin where the fields fold back on themselves, so every 720 degrees the cycle repeats.
That is the definition of a quantum spin, and it occurs at every point in space; and although such illustrations usually show only 3 fields, there can be many, many more. What the discovery of the Higgs proved was that there was indeed another field beyond the ones we already knew.
So mass and particles are nothing more than a stable arrangement in the fields. Those stable arrangements can move through the fields. Any change in the fields is covered by a term we call energy. You now have the modern view of physics and what underpins the Standard Model.
Using that theory you can now cover every known thing in physics except gravity, which is suspected of being another field or fields but not proven.
Why it unsettles people is that, for some reason, being just a manifestation in a bunch of fields seems not as good as being made of matter, as in classical physics. If you don’t like the idea, I suggest you take up religion, because you would have to establish matter as some sort of divine property; as far as we can tell it’s nothing but a bunch of field interactions.
That little detail about gravitation, which is after all spacetime, means Einstein’s work and approach is not finished, and never will be; it is ongoing. It was a little detail in 1900, the UV blackbody catastrophe, that Planck refused to ignore, just after Bertrand Russell claimed physics was at its end-times, with only decimals to be added. Just look what happened!
The glaring paradox of non-locality, brought into contrast very nicely by J.S. Bell, concerned Einstein a lot. The Bohr-Einstein debate is central.
And by the way, replacing the classical world of matter with 2 quantum fields changes nothing – where do these fields come from? They are assumed to be “given.” There is still that question.
Where a QM field comes from is a religious or philosophical question at the moment because science can’t measure there and has no data at the moment. We might have some data some time down the track but for now feel free to make up any answer you like 🙂
LdB
January 2, 2019 at 6:20 pm
————–
Really sorry for your problems, mate…
I am not claiming that I actually can sort out any of your problems or grievances with Einstein’s physics… simply trying to point out that neither you nor I is Einstein.
So think before you try to dismiss the proposition of this guy’s physics… it is not as simple as you may think, or as simple as any other contemporary PhDs in physics think.
A lot of the latest silly PhD children do not much like this guy’s physics either… you are not alone there, mate… plenty of wannabe ones.
It happens to be an obstacle, but so be it, for as long as it stands firm… regardless.
As far as I can tell, I am not even slightly surprised, as you are neither the first nor the last to try watering down Einstein… and his physics…
Please do keep trying… good luck with it.
cheers
Science doesn’t care what you or I think .. prove something or walk.
“Using that theory you can now cover every known thing in physics except gravity, suspected of being another field or fields but not proven.”
General relativity is a field theory. It is not compatible with the fields in the Standard Model, but gravity is also a field with its own symmetries. The hypothetical graviton is an attempt to unify gravity with the Standard Model.
LdB
January 3, 2019 at 11:43 pm
—–
LdB.
The main point that I tried to put to you is simple, and it is as you put it yourself:
“Science doesn’t care what you or I think .. prove something or walk.”
Simply, you and your like have to take the walk, as at the end of the day Einstein proved something that no one, even in the prospect of your claim, can ever disprove.
And all of it still based on Newtonian physics…
Till then, you and everyone else involved have got to be told, as you put it, to prove something or walk.
So, according to all this, you really have got to accept a total failure, and consider taking a hike while at it… regardless of anything whatsoever.
For you or anyone else to offer something better than Newton and Einstein requires that both of these guys be nullified and falsified in their science.
If you think you are such a one, or you think there somehow really happens to be some such guy out there… please do consider that that may be the most clear stupidity ever claimed in the matter and subject of physical science.
I personally have no problem accepting anyone whatsoever as better than either of these two guys in physics, but it requires the accomplishment of a simple clause: nullification and falsification, and, while at it, a better scientific explanation than the one offered by these two Giants of physics.
From where I stand, no one can any longer nullify Einstein’s physics, with a very nearly impossible falsification to consider as well.
So be my guest in considering that your understanding of the universe and existence is better or much better than Einstein’s… please do keep trying it that way.
Maybe you can have guys like me accept, one day, that there really are guys like you who know or could offer better than Einstein or Newton.
Please do keep trying on this… you never know, you may just prove indisputably one day that the likes of you really happen to be better in the science of physics than Einstein combined with Newton…
Miracles sometimes really happen… but by my account and understanding this one is definitely not going to be the case… but hey, do keep trying; you never really know till you have tried it.
Try a convincing nullification of Einstein’s physics… and you get the prize; oh well, only half of it, as the other half requires the clause of falsification achieved…
If you or anyone else can’t, then it is very simple, as you yourself put it:
“prove something or walk.”
where the likes of me may add:
“Until then, stop messing around or crying about it all.”
Prove something against Einstein or Newton, or take the walk… simple as that.
Where shame does not much matter anyway… as far as time and space are concerned.
cheers
“all stored energy like Nuclear,Magnetic and you can’t even have an empty patch of space have energy.”
Classical physics has fields (“empty space with energy”). Maxwell’s classical electrodynamics is a field theory. Faraday invented the magnetic lines of force, which permeate empty space. Gauss’s law for gravity describes a gravitational field.
“how a permanent magnet holds a suspended load from a roof beam indefinitely. Under classical physics you have to say there is no work being done so there is no energy involved”
No work is done, but there’s potential energy. When you drop it, potential energy is converted to kinetic energy. Newton knew that ever since he watched an apple fall from a tree. The Bernoulli equation is a statement of the conservation of potential and kinetic energy.
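For the record, the classical accounting both comments are invoking, written out in symbols (textbook identities, nothing new):

$$
W = \int F \,\mathrm{d}x = 0 \quad \text{while the load hangs still (no displacement)},
$$
$$
E_p = mgh, \qquad mgh = \tfrac{1}{2}mv^2 \;\Rightarrow\; v = \sqrt{2gh} \quad \text{on release}.
$$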
Haha, surely you know the problem with that rubbish.
You have to project the field out to infinity in every direction, right, so every piece of space has the energy from every source. Therefore if you fire any charged particle into space, it has to be cutting the field lines and doing work, and so the charged particle has to come to a halt.
In fact there is no retarding force on any charged particle with constant velocity in the vacuum of space.
You do get that all this junk has been known for 100 years… right?
Have you read it, LdB? I have. It took me a year to get through the first 19 pages, because they contained all of the mechanics I had learned in acquiring a Bachelor’s and Master’s in Mechanical Engineering from Purdue University. When it comes to the “law of gravity”, things get even more interesting. Richard Feynman was preparing a lecture for PhD students at CalTech, and attempted to prove the inverse square law of gravitation the way Newton had done. He failed, mainly because Newton had used deep insights into the properties of second order curves that are no longer mainstream. Switching to more modern methods, Feynman remarked that the proof Newton produced would work given genius, and infinite time.
Newtonian astrophysics isn’t the last word, though it suffices (with refinements even Newton knew it needed) for everything we do in space exploration.
Newtonian physics, however, is remarkably unshakable. The laws of conservation of momentum are invoked at the quantum and astrophysical levels, even if they involve transformations for velocity or dimensional scaling. In fact, Newton’s three laws of motion stand as the only three statements by a human mind that have never been modified or violated.
Never really bothered with Newton, sorry; once I did a QM field unit it became obvious it was just science history, which never really interested me.
I disagree with the idea that Newtonian physics is good enough for modern science; it isn’t. I thought the LIGO results would have made that clear. You now know the universe has truckloads of things called black holes where the theory totally dies, so you need to be extremely careful using it on any scale that may include them. It also has the breakdown problem at the small scale, which has always been known. It is a useful approximation at the human scale and will probably remain so for some time yet, but it will rarely be of any interest in cutting-edge papers.
I should also add that whenever I explain Newtonian physics to undergrads, I make sure the listener understands that it is WRONG, because the theory has one HORRIBLE side effect: it posits a universal frame of reference. That one thing will cause more problems for a student, because if at lower school the teacher never explained that it’s wrong, you seem to have to use a baseball bat on their head to make them look at the data that shows it’s wrong.
So I guess if I had to argue what is most wrong about Newtonian physics, it is that universal reference frame, not the formulas.
BullStuff! The whole notion that computation/Mathematica is the science, and the thought process of the scientist is not, is someone’s opinion. Quote an engineer from Apple? Guess what his opinion is. I organized and ran a digital data exploration research group for one of the world’s known billionaires, and we had a computer expert for our workstation, a SWIR spectrometer operator for ground truth, and myself for selecting the target parameters in the field, using supervised classification and a spectral mapper algorithm to find target alteration. Guess who went on to make large bonuses? Comp? NO. Spectrometer? NO. Choosing ground truth for the target? YES. Ask yourself this: if Einstein were to spring into action now, would he need a computer to visualize his experiments? NO. Students go to college to learn how to read, interpret, and utilize the ideas particular to their field, and scientific reports continue to feed into their thinking.
Yep, completely agree; this rubbish is junk for minor papers of little to no significance. When you get to this sort of level you really are dealing with minor junk in a field.
I get a monthly newsletter from my alma mater. Each month it’s a glory sheet of the papers published by the current faculty. Each month I look at what they’ve done and more often than not, think, “This simply isn’t very interesting or important.” Each month the so-called “research” seems more and more trivial and less and less worth doing. If I had it all to do over again, I’m not sure I’d study in the same field. It just doesn’t seem fun or interesting any more. But oh lordy do the publication lists on the CVs get longer and longer.
Several readers may note that I introduced the SAFIRE (electric sun) work to you a while back, as I am excited about the possibility of low-cost energy displacing wasteful burning of hydrocarbons to make expensive electricity.
SAFIRE, you might note, brings up the possibility of LENR transmutations in controlled plasma layers, and is being researched by some folk who only wanted to prove that the solar furnace was electrically controlled, not just a fusion oven. Boy, were they surprised when they found that “small” plasmas were self-controlling in a laboratory environment, hot enough to cause transmutations. But nowhere did I see that they were up to using Mathematica to understand what is going on. Anyone with a foot in both areas, please bring this excellent WUWT post to their attention.
So you publish your theory and if the data matches it then it’s correct and you would get the credit … what more do you want????
Took years for the science community to realize Peter Higgs was right.
If you are right, well, all accolades to you; and if you’re wrong, no one will know your name. That is how it works.
In reading this post my first reaction was: “paper is useful, if not essential, like Occam’s Razor”: if you can’t communicate clearly and make your point succinctly on paper, you are talking irrelevant BS.
“Science self-selects for bad writing” – Stephen Jay Gould
Loved MathCad in the 90s when it came on two floppy disks and cost $50. Who can afford today’s $12,000 monstrosity? Another great application like AutoCad and OrCad, belled and whistled into a towering cluster that nobody can afford except the off-shore folks who just steal it.
It is definitely better to produce textbooks and papers in interactive formats with all data present and all algorithms explicitly presented.
Maybe it needs a decent open source effort that gets standardized, or gets incorporated into Excel, so people like me who really need it only once or twice a year don’t have to take out a second mortgage. Meanwhile I just live with Excel despite the quirks and unreadable obscurity. Can you imagine publishing a paper in Excel? =SQRT(B1*B1+C1*C1) Pythagoras would choke on his beans.
You now have OpenScad that is free. I love it to make 3D printing models.
Another thing. How accurate is the implementation of the algorithms in these huge math/statistical packages? Usually there is no way to find out.
Also nobody has yet managed to make a statistics package good enough to tell the user: “You can’t do this because….” except for some of the most simple cases, and mostly not even then.
Easier:
=SQRT(B1^2+C1^2)
It looks like Bruce is an old programmer. I deduce this because the code operation for C1*C1 is significantly faster than the one for C1^2. So Bruce is using faster code to do the same thing.
Depending on how good your compiler is, it may already be replacing C1^2 with C1*C1.
For example, most ‘C’ compilers have been replacing multiply and division by factors of 2 with left and right shifts for years.
Computers have the problem that there are many math problems that can’t be calculated exactly. There is always a limit to the accuracy of calculations using numbers that have an infinite number of places: 1/3, pi, sqrt(-1).
Algorithms can get close, but they often simply can’t match the actual physics of what is happening, and result in various truncation errors.
I appreciate what you’re saying, but I don’t think “Pythagoras would choke on his beans…” because certain code operations are faster than others. I’d wager a right triangle that Pythagoras didn’t know about codes. I was looking more at the clunky way of writing the equation, which, not to second-guess Bruce, I think is what he was talking about.
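Since the thread has wandered into numerics, a short Python sketch (offered only as an aside) showing that the two spreadsheet forms agree, why a library hypotenuse routine is preferred anyway, and the truncation point made above:

```python
import math

b, c = 3.0, 4.0
print(math.sqrt(b*b + c*c), math.sqrt(b**2 + c**2), math.hypot(b, c))  # all 5.0

# hypot also avoids overflow: squaring first can blow up even when the
# final result is representable.
big = 1e200
print(math.hypot(big, big))   # ~1.414e+200; math.sqrt(big**2 + big**2) raises OverflowError

# And the truncation point: binary floats cannot represent 1/3 or 0.1 exactly.
print(0.1 + 0.2 == 0.3)       # False
```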
Perhaps they could create a read-only version for those of us who aren’t into publishing but would still like to review existing papers.
For example, Adobe provides a free read-only PDF viewer. Authors are more likely to spend the money to buy the full version of a program if they know that people won’t have any trouble reading what they have written.
“It is definitely better to produce textbooks and papers in interactive formats with all data present and all algorithms explicitly presented.” would have ruined the Hockey Stick fraud.
AutoCAD… My first love from back in the 90s…
The main advantage of AutoCAD is that the .dwg format is a ‘safe’ one many low-end users feel comfortable using, and that it still does 2D drawings very well. (Most 3D packages do 2D very crudely if there are no supporting 3D models, because they are 3D packages and don’t want you to do stand-alone 2D. I have found that if I need an unsupported 2D view it is quicker to AutoCAD it and import it.)
The main disadvantage with AutoCAD is that because people have been using it for so long they do NOT want old features removed. 90s trained fossils like me still expect to be able to use keystrokes and so nothing really gets removed with each update. Dip below the surface and there are some hideously ugly interfaces in that programme.
(also doesn’t play nice with config management software, but we digress)
“Craig from Oz January 2, 2019 at 3:28 pm
The main disadvantage with AutoCAD is that because people have been using it for so long they do NOT want old features removed.”
And that causes problems with organisations, like the one I currently work for, who do not want “features removed”, or drawings written in an old version of the application, but also want it to run under Windows 10. It’s ok, Windows 7 support ends in 2022, so some time yet. LOL
Sorry about this, a hobbyhorse of mine…
Would Mathematica enable an exploration of Wigley’s ‘why the blip’ and ship losses in WWII? Or, more generally, of whether there is a correlation between the amount of light oil spills/runoff and warming?
JF
It does seem a bit strange that Science itself, which by its very nature is supposed to be cutting the edge, would be so slow to adapt to something so important, and so close to its very essence.
“Science advances one funeral at a time.”
Those at the cutting edge of a field already know which papers are important and which are junk, because you can quickly filter them using your knowledge. The problem comes with knowing who is really cutting edge, who is a crackpot, and who is just plain wrong, and that requires knowledge of the field 🙂
The thing is those who really are cutting edge will punch a hole in the existing theory that can’t be ignored, even if a consensus view exists.
Those at the cutting edge generally make small steps forward, but seldom punch a hole in existing theory. They have generally worked themselves into a biased corner. Outsiders often make the punch through, because they are not limited by the “cutting edge” bias.
I don’t think there is a lot wrong with the traditional scientific paper. It is a highly stylized literary form, true, but so is a sonnet, and nobody complains about that.
If there is a problem with the length of papers nowadays it is that they are too short, not too long. The more prestigious journals only accept short papers which means that most of the real meat gets tucked away into “Supporting Information”, (if you are lucky, otherwise nowhere).
And I don’t want more graphics. I want less. Far too often the data you are looking for is only to be found in some flashy diagram where you have to count pixels to find out (approximately) what is really going on. Not to mention how often these complex graphics are used to hide “Mike’s Nature Trick” or something equivalent. I frequently long for the good old table instead.
The problem is that the type of papers you are dealing with in that instance are correlation-type papers, so the work already hangs on some assumptions and is just checking that the correlation matches theory. Correlation papers are really not that important (unless they fail the theory) because they rely on the underlying theory being right.
One of the interesting questions would be to ask a Climate Scientist to list what they consider the really really important papers in the field.
The most appropriate form for a scientific papers seems to me to be a web page, which would have everything provided in the written publication, but might also have interactive graphics, raw data, additional commentary, links to related papers, etc.
The core problem, I believe, is the “publish or perish” paradigm. Once status and money were granted based on the simple act of publishing something, without regard for actual results, the floodgates were opened and the literature was overwhelmed with doubletalk and nonsense, all designed to ensure monetary gain, not understanding. And now there is a lot of money to be made by refusing to change anything, and no money to be made by reform. So, the system is hopelessly corrupted. Today’s “scientific” papers mean no more than the endless theological treatises put out by 15th century monasteries – and they will be forgotten even more quickly.
Decades ago I was a graduate school representative for a computer science MS. It was understandable, as best as I could tell, and indirectly made a good case for numbers. If X and Y and beyond are everything in some fields, that doesn’t seem to always work in others. Biology and geology and even some chemistry come to mind. Field work in marine biology is heavily designed for computer analysis, but often seems to bring out the difficulties in determining cause and effect. Too many papers with lots of numbers just claim what we knew long ago, or still don’t know.
I would like to see an analysis of the evolution of the scientific paper: the good, the bad, and the ugly. Then have the designers pay attention to the needs of the scientist more than the utility of the computer. Maybe it’s just imprinting from papers past, but when the rare paper comes along that requires lots of thought, I have to print it. Easier to flip from page to page, and across multiple pages.
Computers are helpful, so are pencils. Pencils seem freer. I would add that too many papers now are showing off with mass, often with their lack of history.
What you get to do is subject the data to an ever more dizzying array of mathematical tools. You just keep throwing different things at the data until it confesses.
The thing is that you probably don’t understand what you’ve done. You have no idea if it was legit or not. The chance of any of your reviewers being able to call your BS is small, so you’re OK. You get another publication, and if your results suit the purposes of another researcher, you may even get a citation or two.
It wasn’t dendrochronologists who called BS on the hockey stick, it was a couple of statisticians.
Sorry, Dr. Wolfram, GIGO applies equally to you.
It was not dendrochronology that was the problem with the hockey stick, so dendrochronologists, per se, had nothing to do with it. The problem lies in dildoclimatology, i.e., pseudoscience combined with “novel statistical methods.”
The problem with climate science is not with how papers present information, but that a political organization became the arbiter of what is and what is not climate science, replacing peer review with conformance to a political narrative.
The political left is driven by emotional arguments that defy logic and climate scientists allowed a far left political bias to break climate science by enabling the IPCC. I blame scientists on both sides for allowing this to happen, allowing it to persist and allowing the broken science to gain so much traction it may be impossible to fix.
More baldly, the problem is not so much with the mode of presentation as with those who are doing the presenting. Defective thinking is by no means restricted to the political left.
Thomas Huxley put it well: “Sit down before facts like a child, and be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses Nature leads, or you shall learn nothing.” However, he must have been aware how hard, if not impossible, it is for most people to give up their precious prejudices — particularly when their livelihoods and careers are intimately connected with outcomes.
Sceptical,
“Defective thinking is by no means restricted to the political left.”
Yes, defective thinking is common to anyone with an otherwise unsupportable agenda, regardless of their political or religious affiliation. More recently though, the thinking by the left has been particularly defective.
A scientific paper that everyone in the area of a particular piece of research says is good, or equally that everyone says is bad, is of no use: it either confirms what is already widely known or has no chance of making any impact any time soon.
In my (maybe very odd) view, a useful scientific paper is one that 50%+ may reject as no good (science is a ‘dog eats dog’ world) and maybe 10%+ of commentators see as a step forward, with the rest being just the science plodders.
While this article certainly applies to some papers, it does not apply to all. I’ve just completed a lengthy paper with six colleagues on a UV survey of Hawai’i Island. Our findings have wide application in the tropics, subtropics and temperate latitudes. There are no agendas and no modeling, just real data coupled with a wide variety of sky photos by conventional, fisheye, VR and UV cameras. There’s no better way to circulate our findings than to describe them in a scientific paper.
Agree, the scientific paper format is ideal for most concepts. You can skip to “results” for what the team tried to convey, to gauge your interest, and quickly double back to scan “methods” to finesse the limitations. Nothing requires you to read the initial section straight through to the last section. Like HD commented earlier, printing out some content is useful for a better grasp of intriguing research than eyes on a screen, for my pre-wired intellect.
What exactly is the argument being made here? IMHO the purpose of the paper is to communicate what was ‘discovered’ and how another may find the same or similar results. How that is communicated is open for debate. Are we arguing that the format of what passes for ‘scientific papers’ is flawed? Perhaps, but why must one format be forced upon all? If brief explanations (abstract), methods, means, data, and conclusions are included, what forces the format to be printed material at all?
Printed words are rather permanent and do not require the ‘consumer’ (reader?) to have anything more than functioning eyes and literacy. If the concepts are poorly communicated it is the author’s failing not the reader’s.
So now, with Mathematica, math and science students can retire their imaginations in favor of the excellent graphics of the computer program. Why should we struggle to imagine/theorize/visualize/understand what some convoluted formula or algorithm does, and maybe its apparent relationship to some natural process, when Mathematica can show it all in a perfect virtual environment?
Overall, does Mathematica generally lift academics to higher things, or does it just allow more dullards with passable computer skills to scrape through academia?
Undoubtedly the latter. I’ve seen a lot of horrific examples where dullards use programs like Matlab or Mathematica without knowing in the least what they are doing or why.
For example, I wondered for a long time why so many people used Butterworth smoothing instead of something sensible like Gaussian. The Butterworth filter comes from signal processing, where it gives good results over a fairly wide bandwidth and is easy to implement in hardware. It is, however, highly inappropriate for e.g. time-series analysis, since it causes considerable distortion near the endpoints. It is nonetheless included in many math packages because of its usefulness in electronics and signal processing.
Then I realized that it is the first filter in several math packages with alphabetically ordered menus. A lot of “scientists” just chose the first one, probably thinking that it must also be the best…
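To make the point concrete, here is a minimal sketch in R (R comes up further down this thread; the third-party ‘signal’ package is an assumption on my part, since base R has no Butterworth filter) contrasting the two smoothers on a toy series:

library(signal)  # assumed installed: provides butter() and filtfilt()

set.seed(42)
t <- seq(0, 10, length.out = 500)
x <- sin(t) + rnorm(500, sd = 0.2)             # a noisy time series

bf <- butter(4, 0.05)                          # 4th-order low-pass Butterworth
x_butter <- filtfilt(bf, x)                    # zero-phase Butterworth filtering
x_gauss  <- ksmooth(t, x, kernel = "normal", bandwidth = 1)$y  # Gaussian kernel smooth (base R)

# Inspect the ends: the Butterworth output tends to ring away from the
# underlying signal near the boundaries; the Gaussian smooth stays closer.
round(head(cbind(truth = sin(t), butterworth = x_butter, gaussian = x_gauss)), 3)
round(tail(cbind(truth = sin(t), butterworth = x_butter, gaussian = x_gauss)), 3)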
“…or just allow more dullards with passable computer skills to scrape through academia?”
Does a pope sh!t in the woods? Is the bear catholic?
Reminds me of Spirograph art.
Science as practiced at the universities today involves a lot of virtue signalling. Some article on the Net today talks about finding dark matter around ancient galaxies 10 billion light-years away. No one can find it around our own galaxy, or the nearest neighbors.
“Vertex” means “corner,” so why not just say corner? Why make me look up a word? The jargon makes most if not all scientific papers unintelligible to a typical educated person.
So glad I went into engineering, where we are expected to produce useful results.
So you want science to use the words you use for things, because you are important? Sorry, but not even laymen call things by the same name.
Try thong, lacky band, fart, banger, solicitor, jumper and a thousand others.
For what it’s worth, most engineers I interact with use “vertex” for geometry, but that doesn’t make it right, just more common to me.
“The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. ”
Probably the same as the ‘scientific’ calculations and cleverness that drove the financial markets into the ground in 2008/9.
Kelly Johnson of Lockheed would restrict documents written by his engineers to no more than 30 pages. How successful was he?
To name a few aircraft that bore Kelly Johnson’s fingerprints:
the Hudson bomber; the P-80, which became the F-80 of Korean War vintage; the Lockheed Constellation; the Lockheed Electra; the U-2 (nine months from receipt of order); the F-104; and the SR-71 Blackbird, for an incomplete list.
Ah yes, the legendary Kelly Johnson. I remember reading that he gave his successor two pieces of advice on retirement: “keep your drawing system really simple” and “never do business with the US Navy”.
Somehow when reading the article I thought of the scribes in the Vatican in the early 1600s.
The labor involved in taking the data and forcing it into epicyclic orbits for the planets must have been endless.
Perhaps the computations are still there?
The thing about today’s “scribes” writing nonsense about “Global Warming” and “Climate Change” is that it will remain readily available as a record of folly.
Bob Hoye
No, not the case.
Read Gingerich’s book “The Book Nobody Read”, about his casework finding and reading some 300+ surviving annotated (and thoroughly well used!) first and second editions of Copernicus’ “De Revolutionibus” (his book proposing the planets’ circular orbits).
Ptolemy’s epicycles were actually more accurate than the circular orbits around a central sun proposed by Copernicus! The observations were not good enough to find ANY errors (except for Mars) until Kepler, about a century later, used Tycho’s very long axis numbers to propose an ellipse. And even Tycho’s earth-centric circles were better than Copernicus’ first sun-centric circles.
So, no, there were no Vatican monks re-calculating epicycles to prove Galileo wrong. And worse: IF the Vatican monks HAD re-calculated Copernicus’ orbits, they would only have proved that Copernicus was, indeed, the one who was wrong! The original Greek numbers (ephemerides, I think the tables were called) were good for every year, every month, every observation ever made, well within observation limits.
RA
Thanks – I’ll follow up on the book.
Bob
Kepler showed that Ptolemy, Copernicus and Brahe were equipollent, all sharing a common error: geometry. Some principle outside geometry was at work, the first modern physics discovery. And that physical principle, never before hinted at, is gravitation: knowable, yet not of the senses.
Looking at Einstein’s general relativity in this light is then obvious: spacetime geometry and energy density (roughly) are inseparable.
The problem the Vatican had was Aristotle. Hilariously, Galileo had exactly the same problem, writing that he understood not one word of Kepler’s work. Talk about a faux fight!
My fellow Kiwi Ernest Rutherford said that if you need statistics, you should have done a better experiment. My Stats Prof taught us how to tell good stats from bad, and how to tell from the Methods section whether a paper was worthwhile or rubbish. Simplicity is key: a child should be able to understand it, or it is waffle. I do see some good clear thinking here above… Brett
The R language has the same capabilities in R Markdown and R notebooks.
It’s actually very cool: whatever you write becomes an executable document.
So you take your actual code (code that downloads the data, code that analyzes the data, code that creates the charts) and you embed it in your paper. That paper then consists of two kinds of text: normal text, where you explain your results, and other text, the code, that can be re-executed as the user wishes.
The science paper actually DOES the work in front of your eyes.
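A minimal R Markdown sketch of the idea (the chunk uses R’s built-in co2 data set, so nothing here depends on an external download):

---
title: "A minimal executable paper"
output: html_document
---

The chart below is not pasted in as a picture; it is regenerated by
the embedded chunk every time the document is knitted.

```{r co2-chart}
# co2 ships with base R: monthly Mauna Loa CO2 readings, 1959-1997
plot(co2, ylab = "CO2 (ppm)",
     main = "An embedded, re-executable analysis")
```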
The proper forum for a paper is online, with comments allowed. Although many times I do not understand an article here, I can get some understanding and see who is having the better argument from the points made in the comment thread.
A large problem I see with communicating results via electronic media versus written papers is longevity. Today’s computer software/graphics will likely be obsolete in 20 or so years and the results using them may not even be readable by whatever new software comes along. Written results are forever. I can go to a library and read the old papers from hundreds of years ago. Much of the programming and data I did back in the “old” days was on floppy disks or tapes, which I can no longer access. Heck, I can’t even access some of the data and graphing I did 10 years ago. I have submitted papers to online journals. Who knows if those journals will be around in 10 years.
This is truly a point worth remembering.
Now you’ve said it, I agree.
And wish I had realised it myself.
Which cookbook has the tastiest meals? I seriously doubt a computer program is going to be correct.
“Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. ”
Boy, if computers are generating data, what are all the field scientists doing? The author does not seem to distinguish hard-earned data from model output.
In any case, this piece is just a PR piece to make Wolfram an idol. Yet he is, by his own admission, a puffed-up pilferer, a plagiarizer and a bully. I simply refer to the poor academic who tried to have a word of his own at a conference; we don’t know him now, probably sacked and on lithium pills. Wolfram’s product clearly appropriates from everywhere and claims for itself the tears, sweat and efforts of other workers. Through inclusion rather than hard work, Mr Wolfram folds into his shoddy software the works of thousands of dedicated people who created art, science and culture, without paying a dime to them or their descendants.
The history of science, and even of art, is full of these bullies who get all the credit for others’ work. To name a few: Einstein (who remembers Mme Maric, his wife?), Brecht (who remembers his harem, Mme Berlau for instance?) and, say, Oppenheimer (who remembers General Groves?).
Scientific publications, regardless of what this piece says, are by definition and should remain publications, not PowerPoint slides. Words have meaning; fancy bullet points and Instagram pictures do not. Let us see whether they can write a 20-page manuscript to describe a finding that consumed at least three years of their life. No, they simply pillage it from its rightful owners.
Whoever wrote this is dumber than sh!t.
OMG… what a waste of electrons this article is …
Very interesting
“They depend on chains of computer programs that generate data,” 🙂
Sorry, but that explains everything.
The Scientific Paper is Obsolete
Agree. The progressive-leftists have abused the once-decent process so much & for so long, they have pretty much destroyed it. The “scientific paper” can no longer be trusted.
The person who does the peer review should always be named. If the paper is hogwash, he or she should be discredited, as should the journal that published it. This would bring a massive improvement and promote real science.
I agree. And similarly, if I were president, I would make every reporter who asks me a question begin with “My name is so-and-so and I work for such-and-such media organization.”
I would agree. I would also suggest looking at who is funding the research; this often gives a good indication of potential bias in the results. Sometimes the conclusions are written by those supporting the “research,” which may or may not be supported by the data. I would never impugn the integrity of a researcher in general, but eating regularly and living indoors is nice.
I think the scientific paper in computer interactive format is a great idea, if the computer language is transparent. For years, I used MathCad, which started as just a scientific word processor. But with the incorporation of the Maple engine, it also became a powerful computer algebra tool. The one thing it wasn’t, for a very long time, was something that would print all of the pretty equations worth a darn, and that was the original point. Over the years, MathSoft improved it to the point where it fulfilled all of my expectations – and then they sold it to PTC, who promptly destroyed it completely. They have slowly built back its capability in every area except graphics (which suck, big time), and as a bonus have begun to charge exorbitant annual license fees.
I checked out Mathematica, and while I love the power, it’s more than I need for engineering. Plus, it’s very difficult to master. I finally settled on Maple 2018, which has extraordinary power and is relatively easy to master. Plus, it is reasonably priced for a piece of software of its capability. It can produce interactive scientific papers (that’s one of its selling points), with just the same appeal as Mathematica’s. I’m not trying to hawk one over the other, but will say that there are many options for this approach to publishing, and I consider it a good thing.
Oh boy. Another example of “I like how this looks so it must be better”…
Scientific papers have only one purpose. To convey information accurately so that others can read, analyze, understand, and respond. Just because some form is trendy, does not make it better.
Simple is Best.
Apply Occam’s Razor.
1. Mass, charge, magnetic moment, spin, energy, isospin and angular momentum are due to one thing: spin.
2. There are equal numbers of particles and antiparticles, particles being the spinors that make up what we now call particles.
3. There are antiprotons in nuclei.
4. Alpha is 3 protons and 1 antiproton. (2 neutrons are p/p-.)
5. Basic potential between spinors is hbar*delta_v/r.
6. Gravity: ‘scattering’ of potential (or mass rays) by nuclei.
7. Nuclear geometry: everything fits in 0.853fm*A^(1/3) space.
8. Nuclear potential: p/p- attraction, p/p or p-/p- repulsion, gravity and rotational energy.
9. Charge: delta_v = c/137 (see 5).
10. Relativity: sum of spinors between two moving protons PLUS sum over a sphere around a test proton, using 5.
11. Relative mass: due to incomplete sampling of the field by a spinor on the sphere, due to acceleration.
12. There’s an extra spinor with v ~ c/6 in the fringe of a lone antiproton for odd-A nuclei.
I am a millennial college student and I disagree with this. The real advantage of the scientific paper is that it puts a lot of information in one read-through. When writing a research paper for class, it is easier to differentiate bull from legitimate studies by reading a paper than by clicking through multiple PowerPoints. I know that scientists hide behind their jargon, but oftentimes their jargon gives them away.
I spent a year learning Mathematica, only to forget it by the time I actually needed it.
I’ve done a lot of C and awk, have done 3D in POVRay and currently openSCAD. Trying to learn Blender.
gnuplot for plots. No good solution for spinors moving on a sphere. (Python, but I don’t know it.)
WordPress and html5 for website. I’m not very good at it.
So I stumble along.
The peer-reviewed, printed and bound journal was a cost-management system, admittedly at the expense of a free flow of ideas. Its persistence across information-sharing technologies likely reveals the size, influence and power of the journal industry. Also a cost-saving measure is how the review process itself has been compromised. Here is an example of what the idea of peer review once was:
the peer review of Callendar 1938, the mother of all climate-change papers.
https://tambonthongchai.com/2018/06/29/peer-review-comments-on-callendar-1938/
“it was a shame that in mathematics it’s been a tradition for hundreds of years to make papers as formal and austere as possible, often suppressing the very visual aids that mathematicians use to make their discoveries.”
It started with Descartes and his analytic geometry. He transformed geometrical diagrams into algebraic equations:
Circle: x^2 + y^2 = a^2
Ellipse: x^2 / a^2 + y^2 / b^2 = 1
Parabola: y^2 = 4 a x
Hyperbola: x^2 / a^2 - y^2 / b^2 = 1
Newton used geometrical diagrams in his Principia Mathematica, but Lagrange used equations in his Analytical Mechanics (a.k.a. Lagrangian mechanics). He boasted that he gave lectures in mechanics without drawing a single diagram.
Euclid drew geometrical diagrams. Hilbert replaced the diagrams with pure axioms and mathematical logic. He boasted you could replace the lines and points in geometry with tables and chairs and his axioms would still be true.
Call me old-fashioned, but I like drawing diagrams. As an engineering undergrad, I remember my professor giving a purely analytical solution to a problem in mechanics. He used kinematic equations and matrices, and his solution occupied the whole whiteboard.
I used a graphical solution with vectors and good old trigonometry. We got the same answers, but my solution was easier and shorter: I wrote it all on a third of the whiteboard.
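Equation in, diagram out works the other way too. A quick sketch in R (since R notebooks came up earlier in this thread; base graphics only, no packages) that redraws Descartes’ conics from the equations above:

theta <- seq(0, 2 * pi, length.out = 400)
a <- 2; b <- 1
plot(a * cos(theta), a * sin(theta), type = "l", asp = 1,
     xlim = c(-2.5, 5), ylim = c(-2.5, 2.5),
     xlab = "x", ylab = "y")                    # circle:   x^2 + y^2 = a^2
lines(a * cos(theta), b * sin(theta), lty = 2)  # ellipse:  x^2/a^2 + y^2/b^2 = 1
y <- seq(-2.5, 2.5, length.out = 400)
lines(y^2 / (4 * a), y, lty = 3)                # parabola: y^2 = 4ax
s <- seq(-1.5, 1.5, length.out = 400)
lines(a * cosh(s), b * sinh(s), lty = 4)        # hyperbola (right branch): x^2/a^2 - y^2/b^2 = 1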
And because of Descartes’ algebraic approach, he totally missed what Leibniz discovered: vis viva, colloquially called kinetic energy. Just look at Descartes’ hilarious seven rules for billiard-ball collisions! Talk about fake physics! Leibniz had great fun with this: Descartes’ “I think therefore I am” becomes “I think of dinner therefore I am”… Awesome!
It was Kepler before him who pointed out this mistake: something outside geometry, namely a physical principle, universal gravitation, or force, knowable yet not found in the arithmetic of Ptolemy, Copernicus or Brahe.
It is simply stunning that Russell would try that again later with Hilbert, until Gödel had to demolish the rubbish once more.
Looks like the lead here is again a call for Descartes’ resurrection, or Russell’s rehabilitation.
The Copenhagen “interpretation,” with its statistics, is simply the Russell program (with lipstick).
How to deal with the glaring paradox at the core of physics, non-locality? Einstein, with EPR, put this question on the table. Bohm made it even more explicit after a chat with Einstein; both were dissatisfied with the exposition (see Hiley). To unravel this, J.S. Bell of CERN wrote various papers with the “famous” inequality. It is something that needs urgent attention. Einstein wrote that our concept of causality is currently very limited and pointed to music: the causality of a Bach fugue, for example, hints at these problems.
So much for (Principia) Mathematica.
Non-locality has become a dogma; anyone who questions it gets ad hominem attacks from the dogmatists. I formulated a theorem to show the flaw in Bell’s theorem, and was pleased to learn that a physicist from Oxford had done the same before me. I have corresponded with Dr. Christian, since his paper has some similarity with my theorem, although I arrived at mine independently.
Here’s his paper, published in a Royal Society journal:
https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.180526
Haha, is Joy Christian still a doctor? I thought they were going to revoke his degree.
They even set up a Crackpot Randi challenge for him, which Sascha Vongehr created:
https://www.science20.com/alpha_meme/official_quantum_randi_challenge-80168
Still waiting to see him do the simple challenge 🙂
BTW, if you want to know what is wrong with the paper, it’s trivial: the pairs are anti-correlated, they don’t even depend on the local settings a and b, and the correlation answer is -2; it’s fixed before you start.
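A toy sketch of that objection in R (my own illustration, not anyone’s actual code): if every pair is strictly anti-correlated no matter what the settings are, every correlation term is -1 and the CHSH combination is pinned at -2 before any angles are chosen.

set.seed(1)
n <- 1e5
lambda <- sample(c(-1L, 1L), n, replace = TRUE)  # shared hidden variable
A <- function(a) lambda                          # Alice's outcome ignores her setting a
B <- function(b) -lambda                         # Bob's outcome ignores his setting b
E <- function(a, b) mean(A(a) * B(b))            # correlation at settings a, b

# Every term is exactly -1, so S = -1 - (-1) + (-1) + (-1) = -2
S <- E(0, 22.5) - E(0, 67.5) + E(45, 22.5) + E(45, 67.5)
S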
More pseudo-junk from Joy, who has been told the problem over and over again but seems too thick to work it out.
I have not read the comments, so perhaps someone has already said this, but anyway: I don’t think there is anything wrong with the scientific paper. What is mostly obsolete is what people are calling research. If something needs huge amounts of statistical analysis, that probably means the result, whatever it is, is not much use anyway.
It’s a pity science is government funded. I think a lot of government-funded science is just an excuse for not getting a proper job and for getting the mortgage paid. If the research is about global warming, it’s about BS. Much of pharmaceutical-company research is also BS, done to comply with government requirements and to cover up much of the truth. Also, it appears that the more difficult the language and the more obscure the terminology, the better academics like it. God forbid they should write a paper that clearly explains what they have done, rather than obscuring it with gobbledygook. And yes, I have a science degree, I’ve worked in government scientific research and, yes, I have a very modest publication record.